repo_name | path | copies | size | content | license
---|---|---|---|---|---|
shyamalschandra/scikit-learn | examples/feature_selection/plot_select_from_model_boston.py | 146 | 1527 | """
===================================================
Feature selection using SelectFromModel and LassoCV
===================================================
Use the SelectFromModel meta-transformer along with LassoCV to select the
two best features from the Boston dataset.
"""
# Author: Manoj Kumar <[email protected]>
# License: BSD 3 clause
print(__doc__)
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_boston
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
# Load the boston dataset.
boston = load_boston()
X, y = boston['data'], boston['target']
# We use the base estimator LassoCV since the L1 norm promotes sparsity of features.
clf = LassoCV()
# Set a minimum threshold of 0.25
sfm = SelectFromModel(clf, threshold=0.25)
sfm.fit(X, y)
n_features = sfm.transform(X).shape[1]
# Increase the threshold until the number of selected features equals two.
# Note that the threshold attribute can be set directly instead of
# repeatedly fitting the meta-transformer.
while n_features > 2:
sfm.threshold += 0.1
X_transform = sfm.transform(X)
n_features = X_transform.shape[1]
# Plot the selected two features from X.
plt.title(
"Features selected from Boston using SelectFromModel with "
"threshold %0.3f." % sfm.threshold)
feature1 = X_transform[:, 0]
feature2 = X_transform[:, 1]
plt.plot(feature1, feature2, 'r.')
plt.xlabel("Feature number 1")
plt.ylabel("Feature number 2")
plt.ylim([np.min(feature2), np.max(feature2)])
plt.show()
| bsd-3-clause |
lazywei/scikit-learn | examples/cluster/plot_segmentation_toy.py | 258 | 3336 | """
===========================================
Spectral clustering for image segmentation
===========================================
In this example, an image with connected circles is generated and
spectral clustering is used to separate the circles.
In these settings, the :ref:`spectral_clustering` approach solves the problem
known as 'normalized graph cuts': the image is seen as a graph of
connected voxels, and the spectral clustering algorithm amounts to
choosing graph cuts defining regions while minimizing the ratio of the
gradient along the cut to the volume of the region.
As the algorithm tries to balance the volume (i.e. balance the region
sizes), if we take circles with different sizes, the segmentation fails.
In addition, as there is no useful information in the intensity of the image,
or its gradient, we choose to perform the spectral clustering on a graph
that is only weakly informed by the gradient. This is close to performing
a Voronoi partition of the graph.
In addition, we use the mask of the objects to restrict the graph to the
outline of the objects. In this example, we are interested in
separating the objects one from the other, and not from the background.
"""
print(__doc__)
# Authors: Emmanuelle Gouillart <[email protected]>
# Gael Varoquaux <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
###############################################################################
l = 100
x, y = np.indices((l, l))
center1 = (28, 24)
center2 = (40, 50)
center3 = (67, 58)
center4 = (24, 70)
radius1, radius2, radius3, radius4 = 16, 14, 15, 14
circle1 = (x - center1[0]) ** 2 + (y - center1[1]) ** 2 < radius1 ** 2
circle2 = (x - center2[0]) ** 2 + (y - center2[1]) ** 2 < radius2 ** 2
circle3 = (x - center3[0]) ** 2 + (y - center3[1]) ** 2 < radius3 ** 2
circle4 = (x - center4[0]) ** 2 + (y - center4[1]) ** 2 < radius4 ** 2
###############################################################################
# 4 circles
img = circle1 + circle2 + circle3 + circle4
mask = img.astype(bool)
img = img.astype(float)
img += 1 + 0.2 * np.random.randn(*img.shape)
# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(img, mask=mask)
# Take a decreasing function of the gradient: by making the weights only
# weakly dependent on the gradient, the segmentation stays close to a
# Voronoi partition
graph.data = np.exp(-graph.data / graph.data.std())
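# img_to_graph weights each edge with the intensity difference (gradient)
# between neighbouring voxels, and the exponential above turns that gradient
# into a similarity. Within the mask the intensity is nearly constant up to
# noise, so the weights stay close to uniform and the segmentation is driven
# mostly by the graph geometry, as described in the docstring above.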
# Force the solver to be arpack, since amg is numerically
# unstable on this example
labels = spectral_clustering(graph, n_clusters=4, eigen_solver='arpack')
label_im = -np.ones(mask.shape)
label_im[mask] = labels
plt.matshow(img)
plt.matshow(label_im)
###############################################################################
# 2 circles
img = circle1 + circle2
mask = img.astype(bool)
img = img.astype(float)
img += 1 + 0.2 * np.random.randn(*img.shape)
graph = image.img_to_graph(img, mask=mask)
graph.data = np.exp(-graph.data / graph.data.std())
labels = spectral_clustering(graph, n_clusters=2, eigen_solver='arpack')
label_im = -np.ones(mask.shape)
label_im[mask] = labels
plt.matshow(img)
plt.matshow(label_im)
plt.show()
| bsd-3-clause |
abalkin/numpy | numpy/lib/twodim_base.py | 6 | 27555 | """ Basic functions for manipulating 2d arrays
"""
import functools
from numpy.core.numeric import (
asanyarray, arange, zeros, greater_equal, multiply, ones,
asarray, where, int8, int16, int32, int64, empty, promote_types, diagonal,
nonzero
)
from numpy.core.overrides import set_module
from numpy.core import overrides
from numpy.core import iinfo
__all__ = [
'diag', 'diagflat', 'eye', 'fliplr', 'flipud', 'tri', 'triu',
'tril', 'vander', 'histogram2d', 'mask_indices', 'tril_indices',
'tril_indices_from', 'triu_indices', 'triu_indices_from', ]
array_function_dispatch = functools.partial(
overrides.array_function_dispatch, module='numpy')
i1 = iinfo(int8)
i2 = iinfo(int16)
i4 = iinfo(int32)
def _min_int(low, high):
""" get small int that fits the range """
if high <= i1.max and low >= i1.min:
return int8
if high <= i2.max and low >= i2.min:
return int16
if high <= i4.max and low >= i4.min:
return int32
return int64
def _flip_dispatcher(m):
return (m,)
@array_function_dispatch(_flip_dispatcher)
def fliplr(m):
"""
Flip array in the left/right direction.
Flip the entries in each row in the left/right direction.
Columns are preserved, but appear in a different order than before.
Parameters
----------
m : array_like
Input array, must be at least 2-D.
Returns
-------
f : ndarray
A view of `m` with the columns reversed. Since a view
is returned, this operation is :math:`\\mathcal O(1)`.
See Also
--------
flipud : Flip array in the up/down direction.
rot90 : Rotate array counterclockwise.
Notes
-----
Equivalent to m[:,::-1]. Requires the array to be at least 2-D.
Examples
--------
>>> A = np.diag([1.,2.,3.])
>>> A
array([[1., 0., 0.],
[0., 2., 0.],
[0., 0., 3.]])
>>> np.fliplr(A)
array([[0., 0., 1.],
[0., 2., 0.],
[3., 0., 0.]])
>>> A = np.random.randn(2,3,5)
>>> np.all(np.fliplr(A) == A[:,::-1,...])
True
"""
m = asanyarray(m)
if m.ndim < 2:
raise ValueError("Input must be >= 2-d.")
return m[:, ::-1]
@array_function_dispatch(_flip_dispatcher)
def flipud(m):
"""
Flip array in the up/down direction.
Flip the entries in each column in the up/down direction.
Rows are preserved, but appear in a different order than before.
Parameters
----------
m : array_like
Input array.
Returns
-------
out : array_like
A view of `m` with the rows reversed. Since a view is
returned, this operation is :math:`\\mathcal O(1)`.
See Also
--------
fliplr : Flip array in the left/right direction.
rot90 : Rotate array counterclockwise.
Notes
-----
Equivalent to ``m[::-1,...]``.
Does not require the array to be two-dimensional.
Examples
--------
>>> A = np.diag([1.0, 2, 3])
>>> A
array([[1., 0., 0.],
[0., 2., 0.],
[0., 0., 3.]])
>>> np.flipud(A)
array([[0., 0., 3.],
[0., 2., 0.],
[1., 0., 0.]])
>>> A = np.random.randn(2,3,5)
>>> np.all(np.flipud(A) == A[::-1,...])
True
>>> np.flipud([1,2])
array([2, 1])
"""
m = asanyarray(m)
if m.ndim < 1:
raise ValueError("Input must be >= 1-d.")
return m[::-1, ...]
@set_module('numpy')
def eye(N, M=None, k=0, dtype=float, order='C'):
"""
Return a 2-D array with ones on the diagonal and zeros elsewhere.
Parameters
----------
N : int
Number of rows in the output.
M : int, optional
Number of columns in the output. If None, defaults to `N`.
k : int, optional
Index of the diagonal: 0 (the default) refers to the main diagonal,
a positive value refers to an upper diagonal, and a negative value
to a lower diagonal.
dtype : data-type, optional
Data-type of the returned array.
order : {'C', 'F'}, optional
Whether the output should be stored in row-major (C-style) or
column-major (Fortran-style) order in memory.
.. versionadded:: 1.14.0
Returns
-------
I : ndarray of shape (N,M)
An array where all elements are equal to zero, except for the `k`-th
diagonal, whose values are equal to one.
See Also
--------
identity : (almost) equivalent function
diag : diagonal 2-D array from a 1-D array specified by the user.
Examples
--------
>>> np.eye(2, dtype=int)
array([[1, 0],
[0, 1]])
>>> np.eye(3, k=1)
array([[0., 1., 0.],
[0., 0., 1.],
[0., 0., 0.]])
"""
if M is None:
M = N
m = zeros((N, M), dtype=dtype, order=order)
if k >= M:
return m
if k >= 0:
i = k
else:
i = (-k) * M
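# Write ones along the k-th diagonal of the flattened array: start at offset
# i (k columns in for an upper diagonal, -k full rows down for a lower one)
# and step by M + 1, i.e. one row down and one column right per element.
# Slicing to m[:M-k] stops the run before the diagonal wraps onto a new row.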
m[:M-k].flat[i::M+1] = 1
return m
def _diag_dispatcher(v, k=None):
return (v,)
@array_function_dispatch(_diag_dispatcher)
def diag(v, k=0):
"""
Extract a diagonal or construct a diagonal array.
See the more detailed documentation for ``numpy.diagonal`` if you use this
function to extract a diagonal and wish to write to the resulting array;
whether it returns a copy or a view depends on what version of numpy you
are using.
Parameters
----------
v : array_like
If `v` is a 2-D array, return a copy of its `k`-th diagonal.
If `v` is a 1-D array, return a 2-D array with `v` on the `k`-th
diagonal.
k : int, optional
Diagonal in question. The default is 0. Use `k>0` for diagonals
above the main diagonal, and `k<0` for diagonals below the main
diagonal.
Returns
-------
out : ndarray
The extracted diagonal or constructed diagonal array.
See Also
--------
diagonal : Return specified diagonals.
diagflat : Create a 2-D array with the flattened input as a diagonal.
trace : Sum along diagonals.
triu : Upper triangle of an array.
tril : Lower triangle of an array.
Examples
--------
>>> x = np.arange(9).reshape((3,3))
>>> x
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> np.diag(x)
array([0, 4, 8])
>>> np.diag(x, k=1)
array([1, 5])
>>> np.diag(x, k=-1)
array([3, 7])
>>> np.diag(np.diag(x))
array([[0, 0, 0],
[0, 4, 0],
[0, 0, 8]])
"""
v = asanyarray(v)
s = v.shape
if len(s) == 1:
n = s[0]+abs(k)
res = zeros((n, n), v.dtype)
if k >= 0:
i = k
else:
i = (-k) * n
res[:n-k].flat[i::n+1] = v
return res
elif len(s) == 2:
return diagonal(v, k)
else:
raise ValueError("Input must be 1- or 2-d.")
@array_function_dispatch(_diag_dispatcher)
def diagflat(v, k=0):
"""
Create a two-dimensional array with the flattened input as a diagonal.
Parameters
----------
v : array_like
Input data, which is flattened and set as the `k`-th
diagonal of the output.
k : int, optional
Diagonal to set; 0, the default, corresponds to the "main" diagonal,
a positive (negative) `k` giving the number of the diagonal above
(below) the main.
Returns
-------
out : ndarray
The 2-D output array.
See Also
--------
diag : MATLAB work-alike for 1-D and 2-D arrays.
diagonal : Return specified diagonals.
trace : Sum along diagonals.
Examples
--------
>>> np.diagflat([[1,2], [3,4]])
array([[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]])
>>> np.diagflat([1,2], 1)
array([[0, 1, 0],
[0, 0, 2],
[0, 0, 0]])
"""
try:
wrap = v.__array_wrap__
except AttributeError:
wrap = None
v = asarray(v).ravel()
s = len(v)
n = s + abs(k)
res = zeros((n, n), v.dtype)
if (k >= 0):
i = arange(0, n-k)
fi = i+k+i*n
else:
i = arange(0, n+k)
fi = i+(i-k)*n
res.flat[fi] = v
if not wrap:
return res
return wrap(res)
@set_module('numpy')
def tri(N, M=None, k=0, dtype=float):
"""
An array with ones at and below the given diagonal and zeros elsewhere.
Parameters
----------
N : int
Number of rows in the array.
M : int, optional
Number of columns in the array.
By default, `M` is taken equal to `N`.
k : int, optional
The sub-diagonal at and below which the array is filled.
`k` = 0 is the main diagonal, while `k` < 0 is below it,
and `k` > 0 is above. The default is 0.
dtype : dtype, optional
Data type of the returned array. The default is float.
Returns
-------
tri : ndarray of shape (N, M)
Array with its lower triangle filled with ones and zero elsewhere;
in other words ``T[i,j] == 1`` for ``j <= i + k``, 0 otherwise.
Examples
--------
>>> np.tri(3, 5, 2, dtype=int)
array([[1, 1, 1, 0, 0],
[1, 1, 1, 1, 0],
[1, 1, 1, 1, 1]])
>>> np.tri(3, 5, -1)
array([[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.]])
"""
if M is None:
M = N
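# Entry (i, j) of the mask is True exactly when i >= j - k, i.e. j <= i + k,
# obtained by broadcasting row indices against shifted column indices.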
m = greater_equal.outer(arange(N, dtype=_min_int(0, N)),
arange(-k, M-k, dtype=_min_int(-k, M - k)))
# Avoid making a copy if the requested type is already bool
m = m.astype(dtype, copy=False)
return m
def _trilu_dispatcher(m, k=None):
return (m,)
@array_function_dispatch(_trilu_dispatcher)
def tril(m, k=0):
"""
Lower triangle of an array.
Return a copy of an array with elements above the `k`-th diagonal zeroed.
Parameters
----------
m : array_like, shape (M, N)
Input array.
k : int, optional
Diagonal above which to zero elements. `k = 0` (the default) is the
main diagonal, `k < 0` is below it and `k > 0` is above.
Returns
-------
tril : ndarray, shape (M, N)
Lower triangle of `m`, of same shape and data-type as `m`.
See Also
--------
triu : same thing, only for the upper triangle
Examples
--------
>>> np.tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1)
array([[ 0, 0, 0],
[ 4, 0, 0],
[ 7, 8, 0],
[10, 11, 12]])
"""
m = asanyarray(m)
mask = tri(*m.shape[-2:], k=k, dtype=bool)
return where(mask, m, zeros(1, m.dtype))
@array_function_dispatch(_trilu_dispatcher)
def triu(m, k=0):
"""
Upper triangle of an array.
Return a copy of a matrix with the elements below the `k`-th diagonal
zeroed.
Please refer to the documentation for `tril` for further details.
See Also
--------
tril : lower triangle of an array
Examples
--------
>>> np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1)
array([[ 1, 2, 3],
[ 4, 5, 6],
[ 0, 8, 9],
[ 0, 0, 12]])
"""
m = asanyarray(m)
mask = tri(*m.shape[-2:], k=k-1, dtype=bool)
return where(mask, zeros(1, m.dtype), m)
def _vander_dispatcher(x, N=None, increasing=None):
return (x,)
# Originally borrowed from John Hunter and matplotlib
@array_function_dispatch(_vander_dispatcher)
def vander(x, N=None, increasing=False):
"""
Generate a Vandermonde matrix.
The columns of the output matrix are powers of the input vector. The
order of the powers is determined by the `increasing` boolean argument.
Specifically, when `increasing` is False, the `i`-th output column is
the input vector raised element-wise to the power of ``N - i - 1``. Such
a matrix with a geometric progression in each row is named for
Alexandre-Theophile Vandermonde.
Parameters
----------
x : array_like
1-D input array.
N : int, optional
Number of columns in the output. If `N` is not specified, a square
array is returned (``N = len(x)``).
increasing : bool, optional
Order of the powers of the columns. If True, the powers increase
from left to right, if False (the default) they are reversed.
.. versionadded:: 1.9.0
Returns
-------
out : ndarray
Vandermonde matrix. If `increasing` is False, the first column is
``x^(N-1)``, the second ``x^(N-2)`` and so forth. If `increasing` is
True, the columns are ``x^0, x^1, ..., x^(N-1)``.
See Also
--------
polynomial.polynomial.polyvander
Examples
--------
>>> x = np.array([1, 2, 3, 5])
>>> N = 3
>>> np.vander(x, N)
array([[ 1, 1, 1],
[ 4, 2, 1],
[ 9, 3, 1],
[25, 5, 1]])
>>> np.column_stack([x**(N-1-i) for i in range(N)])
array([[ 1, 1, 1],
[ 4, 2, 1],
[ 9, 3, 1],
[25, 5, 1]])
>>> x = np.array([1, 2, 3, 5])
>>> np.vander(x)
array([[ 1, 1, 1, 1],
[ 8, 4, 2, 1],
[ 27, 9, 3, 1],
[125, 25, 5, 1]])
>>> np.vander(x, increasing=True)
array([[ 1, 1, 1, 1],
[ 1, 2, 4, 8],
[ 1, 3, 9, 27],
[ 1, 5, 25, 125]])
The determinant of a square Vandermonde matrix is the product
of the differences between the values of the input vector:
>>> np.linalg.det(np.vander(x))
48.000000000000043 # may vary
>>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1)
48
"""
x = asarray(x)
if x.ndim != 1:
raise ValueError("x must be a one-dimensional array or sequence.")
if N is None:
N = len(x)
v = empty((len(x), N), dtype=promote_types(x.dtype, int))
tmp = v[:, ::-1] if not increasing else v
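# tmp views the output in increasing-power order: column 0 is filled with
# x**0, every later column with x, and the cumulative product below turns
# column j into x**j.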
if N > 0:
tmp[:, 0] = 1
if N > 1:
tmp[:, 1:] = x[:, None]
multiply.accumulate(tmp[:, 1:], out=tmp[:, 1:], axis=1)
return v
def _histogram2d_dispatcher(x, y, bins=None, range=None, normed=None,
weights=None, density=None):
yield x
yield y
# This terrible logic is adapted from the checks in histogram2d
try:
N = len(bins)
except TypeError:
N = 1
if N == 2:
yield from bins # bins=[x, y]
else:
yield bins
yield weights
@array_function_dispatch(_histogram2d_dispatcher)
def histogram2d(x, y, bins=10, range=None, normed=None, weights=None,
density=None):
"""
Compute the bi-dimensional histogram of two data samples.
Parameters
----------
x : array_like, shape (N,)
An array containing the x coordinates of the points to be
histogrammed.
y : array_like, shape (N,)
An array containing the y coordinates of the points to be
histogrammed.
bins : int or array_like or [int, int] or [array, array], optional
The bin specification:
* If int, the number of bins for the two dimensions (nx=ny=bins).
* If array_like, the bin edges for the two dimensions
(x_edges=y_edges=bins).
* If [int, int], the number of bins in each dimension
(nx, ny = bins).
* If [array, array], the bin edges in each dimension
(x_edges, y_edges = bins).
* A combination [int, array] or [array, int], where int
is the number of bins and array is the bin edges.
range : array_like, shape(2,2), optional
The leftmost and rightmost edges of the bins along each dimension
(if not specified explicitly in the `bins` parameters):
``[[xmin, xmax], [ymin, ymax]]``. All values outside of this range
will be considered outliers and not tallied in the histogram.
density : bool, optional
If False, the default, returns the number of samples in each bin.
If True, returns the probability *density* function at the bin,
``bin_count / sample_count / bin_area``.
normed : bool, optional
An alias for the density argument that behaves identically. To avoid
confusion with the broken normed argument to `histogram`, `density`
should be preferred.
weights : array_like, shape(N,), optional
An array of values ``w_i`` weighing each sample ``(x_i, y_i)``.
Weights are normalized to 1 if `normed` is True. If `normed` is
False, the values of the returned histogram are equal to the sum of
the weights belonging to the samples falling into each bin.
Returns
-------
H : ndarray, shape(nx, ny)
The bi-dimensional histogram of samples `x` and `y`. Values in `x`
are histogrammed along the first dimension and values in `y` are
histogrammed along the second dimension.
xedges : ndarray, shape(nx+1,)
The bin edges along the first dimension.
yedges : ndarray, shape(ny+1,)
The bin edges along the second dimension.
See Also
--------
histogram : 1D histogram
histogramdd : Multidimensional histogram
Notes
-----
When `normed` is True, then the returned histogram is the sample
density, defined such that the sum over bins of the product
``bin_value * bin_area`` is 1.
Please note that the histogram does not follow the Cartesian convention
where `x` values are on the abscissa and `y` values on the ordinate
axis. Rather, `x` is histogrammed along the first dimension of the
array (vertical), and `y` along the second dimension of the array
(horizontal). This ensures compatibility with `histogramdd`.
Examples
--------
>>> from matplotlib.image import NonUniformImage
>>> import matplotlib.pyplot as plt
Construct a 2-D histogram with variable bin width. First define the bin
edges:
>>> xedges = [0, 1, 3, 5]
>>> yedges = [0, 2, 3, 4, 6]
Next we create a histogram H with random bin content:
>>> x = np.random.normal(2, 1, 100)
>>> y = np.random.normal(1, 1, 100)
>>> H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges))
>>> H = H.T # Let each row list bins with common y range.
:func:`imshow <matplotlib.pyplot.imshow>` can only display square bins:
>>> fig = plt.figure(figsize=(7, 3))
>>> ax = fig.add_subplot(131, title='imshow: square bins')
>>> plt.imshow(H, interpolation='nearest', origin='low',
... extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
<matplotlib.image.AxesImage object at 0x...>
:func:`pcolormesh <matplotlib.pyplot.pcolormesh>` can display actual edges:
>>> ax = fig.add_subplot(132, title='pcolormesh: actual edges',
... aspect='equal')
>>> X, Y = np.meshgrid(xedges, yedges)
>>> ax.pcolormesh(X, Y, H)
<matplotlib.collections.QuadMesh object at 0x...>
:class:`NonUniformImage <matplotlib.image.NonUniformImage>` can be used to
display actual bin edges with interpolation:
>>> ax = fig.add_subplot(133, title='NonUniformImage: interpolated',
... aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]])
>>> im = NonUniformImage(ax, interpolation='bilinear')
>>> xcenters = (xedges[:-1] + xedges[1:]) / 2
>>> ycenters = (yedges[:-1] + yedges[1:]) / 2
>>> im.set_data(xcenters, ycenters, H)
>>> ax.images.append(im)
>>> plt.show()
"""
from numpy import histogramdd
try:
N = len(bins)
except TypeError:
N = 1
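# A scalar bins value (len() fails) and the two-element forms are passed
# through to histogramdd unchanged; any other sequence is a single set of
# bin edges shared by both dimensions and is expanded below.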
if N != 1 and N != 2:
xedges = yedges = asarray(bins)
bins = [xedges, yedges]
hist, edges = histogramdd([x, y], bins, range, normed, weights, density)
return hist, edges[0], edges[1]
@set_module('numpy')
def mask_indices(n, mask_func, k=0):
"""
Return the indices to access (n, n) arrays, given a masking function.
Assume `mask_func` is a function that, for a square array a of size
``(n, n)`` with a possible offset argument `k`, when called as
``mask_func(a, k)`` returns a new array with zeros in certain locations
(functions like `triu` or `tril` do precisely this). Then this function
returns the indices where the non-zero values would be located.
Parameters
----------
n : int
The returned indices will be valid to access arrays of shape (n, n).
mask_func : callable
A function whose call signature is similar to that of `triu`, `tril`.
That is, ``mask_func(x, k)`` returns a boolean array, shaped like `x`.
`k` is an optional argument to the function.
k : scalar
An optional argument which is passed through to `mask_func`. Functions
like `triu`, `tril` take a second argument that is interpreted as an
offset.
Returns
-------
indices : tuple of arrays.
The `n` arrays of indices corresponding to the locations where
``mask_func(np.ones((n, n)), k)`` is True.
See Also
--------
triu, tril, triu_indices, tril_indices
Notes
-----
.. versionadded:: 1.4.0
Examples
--------
These are the indices that would allow you to access the upper triangular
part of any 3x3 array:
>>> iu = np.mask_indices(3, np.triu)
For example, if `a` is a 3x3 array:
>>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a[iu]
array([0, 1, 2, 4, 5, 8])
An offset can also be passed to the masking function. This gets us the
indices starting on the first diagonal right of the main one:
>>> iu1 = np.mask_indices(3, np.triu, 1)
with which we now extract only three elements:
>>> a[iu1]
array([1, 2, 5])
"""
m = ones((n, n), int)
a = mask_func(m, k)
return nonzero(a != 0)
@set_module('numpy')
def tril_indices(n, k=0, m=None):
"""
Return the indices for the lower-triangle of an (n, m) array.
Parameters
----------
n : int
The row dimension of the arrays for which the returned
indices will be valid.
k : int, optional
Diagonal offset (see `tril` for details).
m : int, optional
.. versionadded:: 1.9.0
The column dimension of the arrays for which the returned
arrays will be valid.
By default `m` is taken equal to `n`.
Returns
-------
inds : tuple of arrays
The indices for the triangle. The returned tuple contains two arrays,
each with the indices along one dimension of the array.
See also
--------
triu_indices : similar function, for upper-triangular.
mask_indices : generic function accepting an arbitrary mask function.
tril, triu
Notes
-----
.. versionadded:: 1.4.0
Examples
--------
Compute two different sets of indices to access 4x4 arrays, one for the
lower triangular part starting at the main diagonal, and one starting two
diagonals further right:
>>> il1 = np.tril_indices(4)
>>> il2 = np.tril_indices(4, 2)
Here is how they can be used with a sample array:
>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
Both for indexing:
>>> a[il1]
array([ 0, 4, 5, ..., 13, 14, 15])
And for assigning values:
>>> a[il1] = -1
>>> a
array([[-1, 1, 2, 3],
[-1, -1, 6, 7],
[-1, -1, -1, 11],
[-1, -1, -1, -1]])
These cover almost the whole array (two diagonals right of the main one):
>>> a[il2] = -10
>>> a
array([[-10, -10, -10, 3],
[-10, -10, -10, -10],
[-10, -10, -10, -10],
[-10, -10, -10, -10]])
"""
return nonzero(tri(n, m, k=k, dtype=bool))
def _trilu_indices_form_dispatcher(arr, k=None):
return (arr,)
@array_function_dispatch(_trilu_indices_form_dispatcher)
def tril_indices_from(arr, k=0):
"""
Return the indices for the lower-triangle of arr.
See `tril_indices` for full details.
Parameters
----------
arr : array_like
The indices will be valid for square arrays whose dimensions are
the same as arr.
k : int, optional
Diagonal offset (see `tril` for details).
See Also
--------
tril_indices, tril
Notes
-----
.. versionadded:: 1.4.0
"""
if arr.ndim != 2:
raise ValueError("input array must be 2-d")
return tril_indices(arr.shape[-2], k=k, m=arr.shape[-1])
@set_module('numpy')
def triu_indices(n, k=0, m=None):
"""
Return the indices for the upper-triangle of an (n, m) array.
Parameters
----------
n : int
The size of the arrays for which the returned indices will
be valid.
k : int, optional
Diagonal offset (see `triu` for details).
m : int, optional
.. versionadded:: 1.9.0
The column dimension of the arrays for which the returned
arrays will be valid.
By default `m` is taken equal to `n`.
Returns
-------
inds : tuple, shape(2) of ndarrays, shape(`n`)
The indices for the triangle. The returned tuple contains two arrays,
each with the indices along one dimension of the array. Can be used
to slice a ndarray of shape(`n`, `n`).
See also
--------
tril_indices : similar function, for lower-triangular.
mask_indices : generic function accepting an arbitrary mask function.
triu, tril
Notes
-----
.. versionadded:: 1.4.0
Examples
--------
Compute two different sets of indices to access 4x4 arrays, one for the
upper triangular part starting at the main diagonal, and one starting two
diagonals further right:
>>> iu1 = np.triu_indices(4)
>>> iu2 = np.triu_indices(4, 2)
Here is how they can be used with a sample array:
>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
Both for indexing:
>>> a[iu1]
array([ 0, 1, 2, ..., 10, 11, 15])
And for assigning values:
>>> a[iu1] = -1
>>> a
array([[-1, -1, -1, -1],
[ 4, -1, -1, -1],
[ 8, 9, -1, -1],
[12, 13, 14, -1]])
These cover only a small part of the whole array (two diagonals right
of the main one):
>>> a[iu2] = -10
>>> a
array([[ -1, -1, -10, -10],
[ 4, -1, -1, -10],
[ 8, 9, -1, -1],
[ 12, 13, 14, -1]])
"""
return nonzero(~tri(n, m, k=k-1, dtype=bool))
@array_function_dispatch(_trilu_indices_form_dispatcher)
def triu_indices_from(arr, k=0):
"""
Return the indices for the upper-triangle of arr.
See `triu_indices` for full details.
Parameters
----------
arr : ndarray, shape(N, N)
The indices will be valid for square arrays.
k : int, optional
Diagonal offset (see `triu` for details).
Returns
-------
triu_indices_from : tuple, shape(2) of ndarray, shape(N)
Indices for the upper-triangle of `arr`.
See Also
--------
triu_indices, triu
Notes
-----
.. versionadded:: 1.4.0
"""
if arr.ndim != 2:
raise ValueError("input array must be 2-d")
return triu_indices(arr.shape[-2], k=k, m=arr.shape[-1])
| bsd-3-clause |
appapantula/scikit-learn | sklearn/feature_extraction/text.py | 110 | 50157 | # -*- coding: utf-8 -*-
# Authors: Olivier Grisel <[email protected]>
# Mathieu Blondel <[email protected]>
# Lars Buitinck <[email protected]>
# Robert Layton <[email protected]>
# Jochen Wersdörfer <[email protected]>
# Roman Sinayev <[email protected]>
#
# License: BSD 3 clause
"""
The :mod:`sklearn.feature_extraction.text` submodule gathers utilities to
build feature vectors from text documents.
"""
from __future__ import unicode_literals
import array
from collections import Mapping, defaultdict
import numbers
from operator import itemgetter
import re
import unicodedata
import numpy as np
import scipy.sparse as sp
from ..base import BaseEstimator, TransformerMixin
from ..externals import six
from ..externals.six.moves import xrange
from ..preprocessing import normalize
from .hashing import FeatureHasher
from .stop_words import ENGLISH_STOP_WORDS
from ..utils import deprecated
from ..utils.fixes import frombuffer_empty, bincount
from ..utils.validation import check_is_fitted
__all__ = ['CountVectorizer',
'ENGLISH_STOP_WORDS',
'TfidfTransformer',
'TfidfVectorizer',
'strip_accents_ascii',
'strip_accents_unicode',
'strip_tags']
def strip_accents_unicode(s):
"""Transform accentuated unicode symbols into their simple counterpart
Warning: the python-level loop and join operations make this
implementation 20 times slower than the strip_accents_ascii basic
normalization.
See also
--------
strip_accents_ascii
Remove accentuated char for any unicode symbol that has a direct
ASCII equivalent.
"""
return ''.join([c for c in unicodedata.normalize('NFKD', s)
if not unicodedata.combining(c)])
def strip_accents_ascii(s):
"""Transform accentuated unicode symbols into ascii or nothing
Warning: this solution is only suited for languages that have a direct
transliteration to ASCII symbols.
See also
--------
strip_accents_unicode
Remove accentuated char for any unicode symbol.
"""
nkfd_form = unicodedata.normalize('NFKD', s)
return nkfd_form.encode('ASCII', 'ignore').decode('ASCII')
def strip_tags(s):
"""Basic regexp based HTML / XML tag stripper function
For serious HTML/XML preprocessing you should rather use an external
library such as lxml or BeautifulSoup.
"""
return re.compile(r"<([^>]+)>", flags=re.UNICODE).sub(" ", s)
def _check_stop_list(stop):
if stop == "english":
return ENGLISH_STOP_WORDS
elif isinstance(stop, six.string_types):
raise ValueError("not a built-in stop list: %s" % stop)
elif stop is None:
return None
else: # assume it's a collection
return frozenset(stop)
class VectorizerMixin(object):
"""Provides common code for text vectorizers (tokenization logic)."""
_white_spaces = re.compile(r"\s\s+")
def decode(self, doc):
"""Decode the input into a string of unicode symbols
The decoding strategy depends on the vectorizer parameters.
"""
if self.input == 'filename':
with open(doc, 'rb') as fh:
doc = fh.read()
elif self.input == 'file':
doc = doc.read()
if isinstance(doc, bytes):
doc = doc.decode(self.encoding, self.decode_error)
if doc is np.nan:
raise ValueError("np.nan is an invalid document, expected byte or "
"unicode string.")
return doc
def _word_ngrams(self, tokens, stop_words=None):
"""Turn tokens into a sequence of n-grams after stop words filtering"""
# handle stop words
if stop_words is not None:
tokens = [w for w in tokens if w not in stop_words]
# handle token n-grams
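# e.g. with ngram_range=(1, 2) the tokens ['please', 'call', 'me'] become
# ['please', 'call', 'me', 'please call', 'call me']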
min_n, max_n = self.ngram_range
if max_n != 1:
original_tokens = tokens
tokens = []
n_original_tokens = len(original_tokens)
for n in xrange(min_n,
min(max_n + 1, n_original_tokens + 1)):
for i in xrange(n_original_tokens - n + 1):
tokens.append(" ".join(original_tokens[i: i + n]))
return tokens
def _char_ngrams(self, text_document):
"""Tokenize text_document into a sequence of character n-grams"""
# normalize white spaces
text_document = self._white_spaces.sub(" ", text_document)
text_len = len(text_document)
ngrams = []
min_n, max_n = self.ngram_range
for n in xrange(min_n, min(max_n + 1, text_len + 1)):
for i in xrange(text_len - n + 1):
ngrams.append(text_document[i: i + n])
return ngrams
def _char_wb_ngrams(self, text_document):
"""Whitespace sensitive char-n-gram tokenization.
Tokenize text_document into a sequence of character n-grams
excluding any whitespace (operating only inside word boundaries)"""
# normalize white spaces
text_document = self._white_spaces.sub(" ", text_document)
min_n, max_n = self.ngram_range
ngrams = []
for w in text_document.split():
w = ' ' + w + ' '
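# each word is padded with a leading and trailing space so that the
# n-grams mark word boundaries, e.g. 'hi' with ngram_range=(2, 2) yields
# ' h', 'hi' and 'i '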
w_len = len(w)
for n in xrange(min_n, max_n + 1):
offset = 0
ngrams.append(w[offset:offset + n])
while offset + n < w_len:
offset += 1
ngrams.append(w[offset:offset + n])
if offset == 0: # count a short word (w_len < n) only once
break
return ngrams
def build_preprocessor(self):
"""Return a function to preprocess the text before tokenization"""
if self.preprocessor is not None:
return self.preprocessor
# unfortunately python functools package does not have an efficient
# `compose` function that would have allowed us to chain a dynamic
# number of functions. However the cost of a lambda call is a few
# hundreds of nanoseconds which is negligible when compared to the
# cost of tokenizing a string of 1000 chars for instance.
noop = lambda x: x
# accent stripping
if not self.strip_accents:
strip_accents = noop
elif callable(self.strip_accents):
strip_accents = self.strip_accents
elif self.strip_accents == 'ascii':
strip_accents = strip_accents_ascii
elif self.strip_accents == 'unicode':
strip_accents = strip_accents_unicode
else:
raise ValueError('Invalid value for "strip_accents": %s' %
self.strip_accents)
if self.lowercase:
return lambda x: strip_accents(x.lower())
else:
return strip_accents
def build_tokenizer(self):
"""Return a function that splits a string into a sequence of tokens"""
if self.tokenizer is not None:
return self.tokenizer
token_pattern = re.compile(self.token_pattern)
return lambda doc: token_pattern.findall(doc)
def get_stop_words(self):
"""Build or fetch the effective stop words list"""
return _check_stop_list(self.stop_words)
def build_analyzer(self):
"""Return a callable that handles preprocessing and tokenization"""
if callable(self.analyzer):
return self.analyzer
preprocess = self.build_preprocessor()
if self.analyzer == 'char':
return lambda doc: self._char_ngrams(preprocess(self.decode(doc)))
elif self.analyzer == 'char_wb':
return lambda doc: self._char_wb_ngrams(
preprocess(self.decode(doc)))
elif self.analyzer == 'word':
stop_words = self.get_stop_words()
tokenize = self.build_tokenizer()
return lambda doc: self._word_ngrams(
tokenize(preprocess(self.decode(doc))), stop_words)
else:
raise ValueError('%s is not a valid tokenization scheme/analyzer' %
self.analyzer)
def _validate_vocabulary(self):
vocabulary = self.vocabulary
if vocabulary is not None:
if not isinstance(vocabulary, Mapping):
vocab = {}
for i, t in enumerate(vocabulary):
if vocab.setdefault(t, i) != i:
msg = "Duplicate term in vocabulary: %r" % t
raise ValueError(msg)
vocabulary = vocab
else:
indices = set(six.itervalues(vocabulary))
if len(indices) != len(vocabulary):
raise ValueError("Vocabulary contains repeated indices.")
for i in xrange(len(vocabulary)):
if i not in indices:
msg = ("Vocabulary of size %d doesn't contain index "
"%d." % (len(vocabulary), i))
raise ValueError(msg)
if not vocabulary:
raise ValueError("empty vocabulary passed to fit")
self.fixed_vocabulary_ = True
self.vocabulary_ = dict(vocabulary)
else:
self.fixed_vocabulary_ = False
def _check_vocabulary(self):
"""Check if vocabulary is empty or missing (not fit-ed)"""
msg = "%(name)s - Vocabulary wasn't fitted."
check_is_fitted(self, 'vocabulary_', msg=msg),
if len(self.vocabulary_) == 0:
raise ValueError("Vocabulary is empty")
@property
@deprecated("The `fixed_vocabulary` attribute is deprecated and will be "
"removed in 0.18. Please use `fixed_vocabulary_` instead.")
def fixed_vocabulary(self):
return self.fixed_vocabulary_
class HashingVectorizer(BaseEstimator, VectorizerMixin):
"""Convert a collection of text documents to a matrix of token occurrences
It turns a collection of text documents into a scipy.sparse matrix holding
token occurrence counts (or binary occurrence information), possibly
normalized as token frequencies if norm='l1' or projected on the euclidean
unit sphere if norm='l2'.
This text vectorizer implementation uses the hashing trick to find the
token string name to feature integer index mapping.
This strategy has several advantages:
- it is very low memory scalable to large datasets as there is no need to
store a vocabulary dictionary in memory
- it is fast to pickle and un-pickle as it holds no state besides the
constructor parameters
- it can be used in a streaming (partial fit) or parallel pipeline as there
is no state computed during fit.
There are also a couple of cons (vs using a CountVectorizer with an
in-memory vocabulary):
- there is no way to compute the inverse transform (from feature indices to
string feature names) which can be a problem when trying to introspect
which features are most important to a model.
- there can be collisions: distinct tokens can be mapped to the same
feature index. However in practice this is rarely an issue if n_features
is large enough (e.g. 2 ** 18 for text classification problems).
- no IDF weighting as this would render the transformer stateful.
The hash function employed is the signed 32-bit version of Murmurhash3.
Read more in the :ref:`User Guide <text_feature_extraction>`.
Parameters
----------
input : string {'filename', 'file', 'content'}
If 'filename', the sequence passed as an argument to fit is
expected to be a list of filenames that need reading to fetch
the raw content to analyze.
If 'file', the sequence items must have a 'read' method (file-like
object) that is called to fetch the bytes in memory.
Otherwise the input is expected to be a sequence of items that can
be of type string or bytes, which are analyzed directly.
encoding : string, default='utf-8'
If bytes or files are given to analyze, this encoding is used to
decode.
decode_error : {'strict', 'ignore', 'replace'}
Instruction on what to do if a byte sequence is given to analyze that
contains characters not of the given `encoding`. By default, it is
'strict', meaning that a UnicodeDecodeError will be raised. Other
values are 'ignore' and 'replace'.
strip_accents : {'ascii', 'unicode', None}
Remove accents during the preprocessing step.
'ascii' is a fast method that only works on characters that have
a direct ASCII mapping.
'unicode' is a slightly slower method that works on any characters.
None (default) does nothing.
analyzer : string, {'word', 'char', 'char_wb'} or callable
Whether the feature should be made of word or character n-grams.
Option 'char_wb' creates character n-grams only from text inside
word boundaries.
If a callable is passed it is used to extract the sequence of features
out of the raw, unprocessed input.
preprocessor : callable or None (default)
Override the preprocessing (string transformation) stage while
preserving the tokenizing and n-grams generation steps.
tokenizer : callable or None (default)
Override the string tokenization step while preserving the
preprocessing and n-grams generation steps.
Only applies if ``analyzer == 'word'``.
ngram_range : tuple (min_n, max_n), default=(1, 1)
The lower and upper boundary of the range of n-values for different
n-grams to be extracted. All values of n such that min_n <= n <= max_n
will be used.
stop_words : string {'english'}, list, or None (default)
If 'english', a built-in stop word list for English is used.
If a list, that list is assumed to contain stop words, all of which
will be removed from the resulting tokens.
Only applies if ``analyzer == 'word'``.
lowercase : boolean, default=True
Convert all characters to lowercase before tokenizing.
token_pattern : string
Regular expression denoting what constitutes a "token", only used
if ``analyzer == 'word'``. The default regexp selects tokens of 2
or more alphanumeric characters (punctuation is completely ignored
and always treated as a token separator).
n_features : integer, default=(2 ** 20)
The number of features (columns) in the output matrices. Small numbers
of features are likely to cause hash collisions, but large numbers
will cause larger coefficient dimensions in linear learners.
norm : 'l1', 'l2' or None, optional
Norm used to normalize term vectors. None for no normalization.
binary: boolean, default=False.
If True, all non zero counts are set to 1. This is useful for discrete
probabilistic models that model binary events rather than integer
counts.
dtype: type, optional
Type of the matrix returned by fit_transform() or transform().
non_negative : boolean, default=False
Whether output matrices should contain non-negative values only;
effectively calls abs on the matrix prior to returning it.
When True, output values can be interpreted as frequencies.
When False, output values will have expected value zero.
See also
--------
CountVectorizer, TfidfVectorizer
"""
def __init__(self, input='content', encoding='utf-8',
decode_error='strict', strip_accents=None,
lowercase=True, preprocessor=None, tokenizer=None,
stop_words=None, token_pattern=r"(?u)\b\w\w+\b",
ngram_range=(1, 1), analyzer='word', n_features=(2 ** 20),
binary=False, norm='l2', non_negative=False,
dtype=np.float64):
self.input = input
self.encoding = encoding
self.decode_error = decode_error
self.strip_accents = strip_accents
self.preprocessor = preprocessor
self.tokenizer = tokenizer
self.analyzer = analyzer
self.lowercase = lowercase
self.token_pattern = token_pattern
self.stop_words = stop_words
self.n_features = n_features
self.ngram_range = ngram_range
self.binary = binary
self.norm = norm
self.non_negative = non_negative
self.dtype = dtype
def partial_fit(self, X, y=None):
"""Does nothing: this transformer is stateless.
This method is just there to mark the fact that this transformer
can work in a streaming setup.
"""
return self
def fit(self, X, y=None):
"""Does nothing: this transformer is stateless."""
# triggers a parameter validation
self._get_hasher().fit(X, y=y)
return self
def transform(self, X, y=None):
"""Transform a sequence of documents to a document-term matrix.
Parameters
----------
X : iterable over raw text documents, length = n_samples
Samples. Each sample must be a text document (either bytes or
unicode strings, file name or file object depending on the
constructor argument) which will be tokenized and hashed.
y : (ignored)
Returns
-------
X : scipy.sparse matrix, shape = (n_samples, self.n_features)
Document-term matrix.
"""
analyzer = self.build_analyzer()
X = self._get_hasher().transform(analyzer(doc) for doc in X)
if self.binary:
X.data.fill(1)
if self.norm is not None:
X = normalize(X, norm=self.norm, copy=False)
return X
# Alias transform to fit_transform for convenience
fit_transform = transform
def _get_hasher(self):
return FeatureHasher(n_features=self.n_features,
input_type='string', dtype=self.dtype,
non_negative=self.non_negative)
def _document_frequency(X):
"""Count the number of non-zero values for each feature in sparse X."""
if sp.isspmatrix_csr(X):
return bincount(X.indices, minlength=X.shape[1])
else:
return np.diff(sp.csc_matrix(X, copy=False).indptr)
class CountVectorizer(BaseEstimator, VectorizerMixin):
"""Convert a collection of text documents to a matrix of token counts
This implementation produces a sparse representation of the counts using
scipy.sparse.coo_matrix.
If you do not provide an a-priori dictionary and you do not use an analyzer
that does some kind of feature selection then the number of features will
be equal to the vocabulary size found by analyzing the data.
Read more in the :ref:`User Guide <text_feature_extraction>`.
Parameters
----------
input : string {'filename', 'file', 'content'}
If 'filename', the sequence passed as an argument to fit is
expected to be a list of filenames that need reading to fetch
the raw content to analyze.
If 'file', the sequence items must have a 'read' method (file-like
object) that is called to fetch the bytes in memory.
Otherwise the input is expected to be a sequence of items that can
be of type string or bytes, which are analyzed directly.
encoding : string, 'utf-8' by default.
If bytes or files are given to analyze, this encoding is used to
decode.
decode_error : {'strict', 'ignore', 'replace'}
Instruction on what to do if a byte sequence is given to analyze that
contains characters not of the given `encoding`. By default, it is
'strict', meaning that a UnicodeDecodeError will be raised. Other
values are 'ignore' and 'replace'.
strip_accents : {'ascii', 'unicode', None}
Remove accents during the preprocessing step.
'ascii' is a fast method that only works on characters that have
a direct ASCII mapping.
'unicode' is a slightly slower method that works on any characters.
None (default) does nothing.
analyzer : string, {'word', 'char', 'char_wb'} or callable
Whether the feature should be made of word or character n-grams.
Option 'char_wb' creates character n-grams only from text inside
word boundaries.
If a callable is passed it is used to extract the sequence of features
out of the raw, unprocessed input.
Only applies if ``analyzer == 'word'``.
preprocessor : callable or None (default)
Override the preprocessing (string transformation) stage while
preserving the tokenizing and n-grams generation steps.
tokenizer : callable or None (default)
Override the string tokenization step while preserving the
preprocessing and n-grams generation steps.
Only applies if ``analyzer == 'word'``.
ngram_range : tuple (min_n, max_n)
The lower and upper boundary of the range of n-values for different
n-grams to be extracted. All values of n such that min_n <= n <= max_n
will be used.
stop_words : string {'english'}, list, or None (default)
If 'english', a built-in stop word list for English is used.
If a list, that list is assumed to contain stop words, all of which
will be removed from the resulting tokens.
Only applies if ``analyzer == 'word'``.
If None, no stop words will be used. max_df can be set to a value
in the range [0.7, 1.0) to automatically detect and filter stop
words based on intra corpus document frequency of terms.
lowercase : boolean, True by default
Convert all characters to lowercase before tokenizing.
token_pattern : string
Regular expression denoting what constitutes a "token", only used
if ``analyzer == 'word'``. The default regexp selects tokens of 2
or more alphanumeric characters (punctuation is completely ignored
and always treated as a token separator).
max_df : float in range [0.0, 1.0] or int, default=1.0
When building the vocabulary ignore terms that have a document
frequency strictly higher than the given threshold (corpus-specific
stop words).
If float, the parameter represents a proportion of documents; if
integer, absolute counts.
This parameter is ignored if vocabulary is not None.
min_df : float in range [0.0, 1.0] or int, default=1
When building the vocabulary ignore terms that have a document
frequency strictly lower than the given threshold. This value is also
called cut-off in the literature.
If float, the parameter represents a proportion of documents; if
integer, absolute counts.
This parameter is ignored if vocabulary is not None.
max_features : int or None, default=None
If not None, build a vocabulary that only consider the top
max_features ordered by term frequency across the corpus.
This parameter is ignored if vocabulary is not None.
vocabulary : Mapping or iterable, optional
Either a Mapping (e.g., a dict) where keys are terms and values are
indices in the feature matrix, or an iterable over terms. If not
given, a vocabulary is determined from the input documents. Indices
in the mapping should not be repeated and should not have any gap
between 0 and the largest index.
binary : boolean, default=False
If True, all non zero counts are set to 1. This is useful for discrete
probabilistic models that model binary events rather than integer
counts.
dtype : type, optional
Type of the matrix returned by fit_transform() or transform().
Attributes
----------
vocabulary_ : dict
A mapping of terms to feature indices.
stop_words_ : set
Terms that were ignored because they either:
- occurred in too many documents (`max_df`)
- occurred in too few documents (`min_df`)
- were cut off by feature selection (`max_features`).
This is only available if no vocabulary was given.
See also
--------
HashingVectorizer, TfidfVectorizer
Notes
-----
The ``stop_words_`` attribute can get large and increase the model size
when pickling. This attribute is provided only for introspection and can
be safely removed using delattr or set to None before pickling.
"""
def __init__(self, input='content', encoding='utf-8',
decode_error='strict', strip_accents=None,
lowercase=True, preprocessor=None, tokenizer=None,
stop_words=None, token_pattern=r"(?u)\b\w\w+\b",
ngram_range=(1, 1), analyzer='word',
max_df=1.0, min_df=1, max_features=None,
vocabulary=None, binary=False, dtype=np.int64):
self.input = input
self.encoding = encoding
self.decode_error = decode_error
self.strip_accents = strip_accents
self.preprocessor = preprocessor
self.tokenizer = tokenizer
self.analyzer = analyzer
self.lowercase = lowercase
self.token_pattern = token_pattern
self.stop_words = stop_words
self.max_df = max_df
self.min_df = min_df
if max_df < 0 or min_df < 0:
raise ValueError("negative value for max_df of min_df")
self.max_features = max_features
if max_features is not None:
if (not isinstance(max_features, numbers.Integral) or
max_features <= 0):
raise ValueError(
"max_features=%r, neither a positive integer nor None"
% max_features)
self.ngram_range = ngram_range
self.vocabulary = vocabulary
self.binary = binary
self.dtype = dtype
def _sort_features(self, X, vocabulary):
"""Sort features by name
Returns a reordered matrix and modifies the vocabulary in place
"""
sorted_features = sorted(six.iteritems(vocabulary))
map_index = np.empty(len(sorted_features), dtype=np.int32)
for new_val, (term, old_val) in enumerate(sorted_features):
map_index[new_val] = old_val
vocabulary[term] = new_val
return X[:, map_index]
def _limit_features(self, X, vocabulary, high=None, low=None,
limit=None):
"""Remove too rare or too common features.
Prune features that are non zero in more samples than high or less
documents than low, modifying the vocabulary, and restricting it to
at most the limit most frequent.
This does not prune samples with zero features.
"""
if high is None and low is None and limit is None:
return X, set()
# Calculate a mask based on document frequencies
dfs = _document_frequency(X)
tfs = np.asarray(X.sum(axis=0)).ravel()
mask = np.ones(len(dfs), dtype=bool)
if high is not None:
mask &= dfs <= high
if low is not None:
mask &= dfs >= low
if limit is not None and mask.sum() > limit:
mask_inds = (-tfs[mask]).argsort()[:limit]
new_mask = np.zeros(len(dfs), dtype=bool)
new_mask[np.where(mask)[0][mask_inds]] = True
mask = new_mask
new_indices = np.cumsum(mask) - 1 # maps old indices to new
removed_terms = set()
for term, old_index in list(six.iteritems(vocabulary)):
if mask[old_index]:
vocabulary[term] = new_indices[old_index]
else:
del vocabulary[term]
removed_terms.add(term)
kept_indices = np.where(mask)[0]
if len(kept_indices) == 0:
raise ValueError("After pruning, no terms remain. Try a lower"
" min_df or a higher max_df.")
return X[:, kept_indices], removed_terms
def _count_vocab(self, raw_documents, fixed_vocab):
"""Create sparse feature matrix, and vocabulary where fixed_vocab=False
"""
if fixed_vocab:
vocabulary = self.vocabulary_
else:
# Add a new value when a new vocabulary item is seen
vocabulary = defaultdict()
vocabulary.default_factory = vocabulary.__len__
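# looking up an unseen feature therefore inserts it with the next free
# index, i.e. len(vocabulary) at the time of insertion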
analyze = self.build_analyzer()
j_indices = _make_int_array()
indptr = _make_int_array()
indptr.append(0)
for doc in raw_documents:
for feature in analyze(doc):
try:
j_indices.append(vocabulary[feature])
except KeyError:
# Ignore out-of-vocabulary items for fixed_vocab=True
continue
indptr.append(len(j_indices))
if not fixed_vocab:
# disable defaultdict behaviour
vocabulary = dict(vocabulary)
if not vocabulary:
raise ValueError("empty vocabulary; perhaps the documents only"
" contain stop words")
j_indices = frombuffer_empty(j_indices, dtype=np.intc)
indptr = np.frombuffer(indptr, dtype=np.intc)
values = np.ones(len(j_indices))
X = sp.csr_matrix((values, j_indices, indptr),
shape=(len(indptr) - 1, len(vocabulary)),
dtype=self.dtype)
X.sum_duplicates()
return vocabulary, X
def fit(self, raw_documents, y=None):
"""Learn a vocabulary dictionary of all tokens in the raw documents.
Parameters
----------
raw_documents : iterable
An iterable which yields either str, unicode or file objects.
Returns
-------
self
"""
self.fit_transform(raw_documents)
return self
def fit_transform(self, raw_documents, y=None):
"""Learn the vocabulary dictionary and return term-document matrix.
This is equivalent to fit followed by transform, but more efficiently
implemented.
Parameters
----------
raw_documents : iterable
An iterable which yields either str, unicode or file objects.
Returns
-------
X : sparse matrix, [n_samples, n_features]
Document-term matrix.
"""
# We intentionally don't call the transform method to make
# fit_transform overridable without unwanted side effects in
# TfidfVectorizer.
self._validate_vocabulary()
max_df = self.max_df
min_df = self.min_df
max_features = self.max_features
vocabulary, X = self._count_vocab(raw_documents,
self.fixed_vocabulary_)
if self.binary:
X.data.fill(1)
if not self.fixed_vocabulary_:
X = self._sort_features(X, vocabulary)
n_doc = X.shape[0]
max_doc_count = (max_df
if isinstance(max_df, numbers.Integral)
else max_df * n_doc)
min_doc_count = (min_df
if isinstance(min_df, numbers.Integral)
else min_df * n_doc)
if max_doc_count < min_doc_count:
raise ValueError(
"max_df corresponds to < documents than min_df")
X, self.stop_words_ = self._limit_features(X, vocabulary,
max_doc_count,
min_doc_count,
max_features)
self.vocabulary_ = vocabulary
return X
def transform(self, raw_documents):
"""Transform documents to document-term matrix.
Extract token counts out of raw text documents using the vocabulary
fitted with fit or the one provided to the constructor.
Parameters
----------
raw_documents : iterable
An iterable which yields either str, unicode or file objects.
Returns
-------
X : sparse matrix, [n_samples, n_features]
Document-term matrix.
"""
if not hasattr(self, 'vocabulary_'):
self._validate_vocabulary()
self._check_vocabulary()
# use the same matrix-building strategy as fit_transform
_, X = self._count_vocab(raw_documents, fixed_vocab=True)
if self.binary:
X.data.fill(1)
return X
def inverse_transform(self, X):
"""Return terms per document with nonzero entries in X.
Parameters
----------
X : {array, sparse matrix}, shape = [n_samples, n_features]
Returns
-------
X_inv : list of arrays, len = n_samples
List of arrays of terms.
"""
self._check_vocabulary()
if sp.issparse(X):
# We need CSR format for fast row manipulations.
X = X.tocsr()
else:
# We need to convert X to a matrix, so that the indexing
# returns 2D objects
X = np.asmatrix(X)
n_samples = X.shape[0]
terms = np.array(list(self.vocabulary_.keys()))
indices = np.array(list(self.vocabulary_.values()))
inverse_vocabulary = terms[np.argsort(indices)]
return [inverse_vocabulary[X[i, :].nonzero()[1]].ravel()
for i in range(n_samples)]
def get_feature_names(self):
"""Array mapping from feature integer indices to feature name"""
self._check_vocabulary()
return [t for t, i in sorted(six.iteritems(self.vocabulary_),
key=itemgetter(1))]
def _make_int_array():
"""Construct an array.array of a type suitable for scipy.sparse indices."""
return array.array(str("i"))
class TfidfTransformer(BaseEstimator, TransformerMixin):
"""Transform a count matrix to a normalized tf or tf-idf representation
Tf means term-frequency while tf-idf means term-frequency times inverse
document-frequency. This is a common term weighting scheme in information
retrieval that has also found good use in document classification.
The goal of using tf-idf instead of the raw frequencies of occurrence of a
token in a given document is to scale down the impact of tokens that occur
very frequently in a given corpus and that are hence empirically less
informative than features that occur in a small fraction of the training
corpus.
The actual formula used for tf-idf is tf * (idf + 1) = tf + tf * idf,
instead of tf * idf. The effect of this is that terms with zero idf, i.e.
that occur in all documents of a training set, will not be entirely
ignored. The formulas used to compute tf and idf depend on parameter
settings that correspond to the SMART notation used in IR, as follows:
Tf is "n" (natural) by default, "l" (logarithmic) when sublinear_tf=True.
Idf is "t" when use_idf is given, "n" (none) otherwise.
Normalization is "c" (cosine) when norm='l2', "n" (none) when norm=None.
Read more in the :ref:`User Guide <text_feature_extraction>`.
Parameters
----------
norm : 'l1', 'l2' or None, optional
Norm used to normalize term vectors. None for no normalization.
use_idf : boolean, default=True
Enable inverse-document-frequency reweighting.
smooth_idf : boolean, default=True
Smooth idf weights by adding one to document frequencies, as if an
extra document was seen containing every term in the collection
exactly once. Prevents zero divisions.
sublinear_tf : boolean, default=False
Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
References
----------
.. [Yates2011] `R. Baeza-Yates and B. Ribeiro-Neto (2011). Modern
Information Retrieval. Addison Wesley, pp. 68-74.`
.. [MRS2008] `C.D. Manning, P. Raghavan and H. Schuetze (2008).
Introduction to Information Retrieval. Cambridge University
Press, pp. 118-120.`
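    Examples
    --------
    A minimal sketch; the count matrix below is invented purely for illustration:
    >>> import numpy as np
    >>> from sklearn.feature_extraction.text import TfidfTransformer
    >>> counts = np.array([[3, 0, 1], [2, 0, 0], [3, 1, 0]])
    >>> transformer = TfidfTransformer()
    >>> tfidf = transformer.fit_transform(counts)
    >>> tfidf.shape
    (3, 3)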
"""
def __init__(self, norm='l2', use_idf=True, smooth_idf=True,
sublinear_tf=False):
self.norm = norm
self.use_idf = use_idf
self.smooth_idf = smooth_idf
self.sublinear_tf = sublinear_tf
def fit(self, X, y=None):
"""Learn the idf vector (global term weights)
Parameters
----------
X : sparse matrix, [n_samples, n_features]
a matrix of term/token counts
"""
if not sp.issparse(X):
X = sp.csc_matrix(X)
if self.use_idf:
n_samples, n_features = X.shape
df = _document_frequency(X)
# perform idf smoothing if required
df += int(self.smooth_idf)
n_samples += int(self.smooth_idf)
# log+1 instead of log makes sure terms with zero idf don't get
# suppressed entirely.
idf = np.log(float(n_samples) / df) + 1.0
self._idf_diag = sp.spdiags(idf,
diags=0, m=n_features, n=n_features)
return self
def transform(self, X, copy=True):
"""Transform a count matrix to a tf or tf-idf representation
Parameters
----------
X : sparse matrix, [n_samples, n_features]
a matrix of term/token counts
copy : boolean, default True
Whether to copy X and operate on the copy or perform in-place
operations.
Returns
-------
vectors : sparse matrix, [n_samples, n_features]
"""
if hasattr(X, 'dtype') and np.issubdtype(X.dtype, np.float):
# preserve float family dtype
X = sp.csr_matrix(X, copy=copy)
else:
# convert counts or binary occurrences to floats
X = sp.csr_matrix(X, dtype=np.float64, copy=copy)
n_samples, n_features = X.shape
if self.sublinear_tf:
np.log(X.data, X.data)
X.data += 1
if self.use_idf:
check_is_fitted(self, '_idf_diag', 'idf vector is not fitted')
expected_n_features = self._idf_diag.shape[0]
if n_features != expected_n_features:
raise ValueError("Input has n_features=%d while the model"
" has been trained with n_features=%d" % (
n_features, expected_n_features))
# *= doesn't work
X = X * self._idf_diag
if self.norm:
X = normalize(X, norm=self.norm, copy=False)
return X
@property
def idf_(self):
if hasattr(self, "_idf_diag"):
return np.ravel(self._idf_diag.sum(axis=0))
else:
return None
class TfidfVectorizer(CountVectorizer):
"""Convert a collection of raw documents to a matrix of TF-IDF features.
Equivalent to CountVectorizer followed by TfidfTransformer.
Read more in the :ref:`User Guide <text_feature_extraction>`.
Parameters
----------
input : string {'filename', 'file', 'content'}
If 'filename', the sequence passed as an argument to fit is
expected to be a list of filenames that need reading to fetch
the raw content to analyze.
If 'file', the sequence items must have a 'read' method (file-like
object) that is called to fetch the bytes in memory.
        Otherwise the input is expected to be a sequence of items that
        can be analyzed directly (strings or bytes).
encoding : string, 'utf-8' by default.
If bytes or files are given to analyze, this encoding is used to
decode.
decode_error : {'strict', 'ignore', 'replace'}
Instruction on what to do if a byte sequence is given to analyze that
contains characters not of the given `encoding`. By default, it is
'strict', meaning that a UnicodeDecodeError will be raised. Other
values are 'ignore' and 'replace'.
strip_accents : {'ascii', 'unicode', None}
Remove accents during the preprocessing step.
'ascii' is a fast method that only works on characters that have
        a direct ASCII mapping.
'unicode' is a slightly slower method that works on any characters.
None (default) does nothing.
analyzer : string, {'word', 'char'} or callable
Whether the feature should be made of word or character n-grams.
If a callable is passed it is used to extract the sequence of features
out of the raw, unprocessed input.
preprocessor : callable or None (default)
Override the preprocessing (string transformation) stage while
preserving the tokenizing and n-grams generation steps.
tokenizer : callable or None (default)
Override the string tokenization step while preserving the
preprocessing and n-grams generation steps.
Only applies if ``analyzer == 'word'``.
ngram_range : tuple (min_n, max_n)
The lower and upper boundary of the range of n-values for different
n-grams to be extracted. All values of n such that min_n <= n <= max_n
will be used.
stop_words : string {'english'}, list, or None (default)
If a string, it is passed to _check_stop_list and the appropriate stop
list is returned. 'english' is currently the only supported string
value.
If a list, that list is assumed to contain stop words, all of which
will be removed from the resulting tokens.
Only applies if ``analyzer == 'word'``.
If None, no stop words will be used. max_df can be set to a value
in the range [0.7, 1.0) to automatically detect and filter stop
words based on intra corpus document frequency of terms.
lowercase : boolean, default True
Convert all characters to lowercase before tokenizing.
token_pattern : string
Regular expression denoting what constitutes a "token", only used
if ``analyzer == 'word'``. The default regexp selects tokens of 2
or more alphanumeric characters (punctuation is completely ignored
and always treated as a token separator).
max_df : float in range [0.0, 1.0] or int, default=1.0
When building the vocabulary ignore terms that have a document
frequency strictly higher than the given threshold (corpus-specific
stop words).
If float, the parameter represents a proportion of documents, integer
absolute counts.
This parameter is ignored if vocabulary is not None.
min_df : float in range [0.0, 1.0] or int, default=1
When building the vocabulary ignore terms that have a document
frequency strictly lower than the given threshold. This value is also
called cut-off in the literature.
If float, the parameter represents a proportion of documents, integer
absolute counts.
This parameter is ignored if vocabulary is not None.
max_features : int or None, default=None
If not None, build a vocabulary that only consider the top
max_features ordered by term frequency across the corpus.
This parameter is ignored if vocabulary is not None.
vocabulary : Mapping or iterable, optional
Either a Mapping (e.g., a dict) where keys are terms and values are
indices in the feature matrix, or an iterable over terms. If not
given, a vocabulary is determined from the input documents.
binary : boolean, default=False
If True, all non-zero term counts are set to 1. This does not mean
outputs will have only 0/1 values, only that the tf term in tf-idf
is binary. (Set idf and normalization to False to get 0/1 outputs.)
dtype : type, optional
Type of the matrix returned by fit_transform() or transform().
norm : 'l1', 'l2' or None, optional
Norm used to normalize term vectors. None for no normalization.
use_idf : boolean, default=True
Enable inverse-document-frequency reweighting.
smooth_idf : boolean, default=True
Smooth idf weights by adding one to document frequencies, as if an
extra document was seen containing every term in the collection
exactly once. Prevents zero divisions.
sublinear_tf : boolean, default=False
Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
Attributes
----------
idf_ : array, shape = [n_features], or None
The learned idf vector (global term weights)
when ``use_idf`` is set to True, None otherwise.
stop_words_ : set
Terms that were ignored because they either:
- occurred in too many documents (`max_df`)
- occurred in too few documents (`min_df`)
- were cut off by feature selection (`max_features`).
This is only available if no vocabulary was given.
See also
--------
CountVectorizer
Tokenize the documents and count the occurrences of token and return
them as a sparse matrix
TfidfTransformer
Apply Term Frequency Inverse Document Frequency normalization to a
sparse matrix of occurrence counts.
Notes
-----
The ``stop_words_`` attribute can get large and increase the model size
when pickling. This attribute is provided only for introspection and can
be safely removed using delattr or set to None before pickling.
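    Examples
    --------
    A minimal usage sketch; the two documents below are invented for illustration:
    >>> from sklearn.feature_extraction.text import TfidfVectorizer
    >>> corpus = ["the quick brown fox", "the lazy dog"]
    >>> vectorizer = TfidfVectorizer()
    >>> X = vectorizer.fit_transform(corpus)
    >>> X.shape
    (2, 6)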
"""
def __init__(self, input='content', encoding='utf-8',
decode_error='strict', strip_accents=None, lowercase=True,
preprocessor=None, tokenizer=None, analyzer='word',
stop_words=None, token_pattern=r"(?u)\b\w\w+\b",
ngram_range=(1, 1), max_df=1.0, min_df=1,
max_features=None, vocabulary=None, binary=False,
dtype=np.int64, norm='l2', use_idf=True, smooth_idf=True,
sublinear_tf=False):
super(TfidfVectorizer, self).__init__(
input=input, encoding=encoding, decode_error=decode_error,
strip_accents=strip_accents, lowercase=lowercase,
preprocessor=preprocessor, tokenizer=tokenizer, analyzer=analyzer,
stop_words=stop_words, token_pattern=token_pattern,
ngram_range=ngram_range, max_df=max_df, min_df=min_df,
max_features=max_features, vocabulary=vocabulary, binary=binary,
dtype=dtype)
self._tfidf = TfidfTransformer(norm=norm, use_idf=use_idf,
smooth_idf=smooth_idf,
sublinear_tf=sublinear_tf)
# Broadcast the TF-IDF parameters to the underlying transformer instance
# for easy grid search and repr
@property
def norm(self):
return self._tfidf.norm
@norm.setter
def norm(self, value):
self._tfidf.norm = value
@property
def use_idf(self):
return self._tfidf.use_idf
@use_idf.setter
def use_idf(self, value):
self._tfidf.use_idf = value
@property
def smooth_idf(self):
return self._tfidf.smooth_idf
@smooth_idf.setter
def smooth_idf(self, value):
self._tfidf.smooth_idf = value
@property
def sublinear_tf(self):
return self._tfidf.sublinear_tf
@sublinear_tf.setter
def sublinear_tf(self, value):
self._tfidf.sublinear_tf = value
@property
def idf_(self):
return self._tfidf.idf_
def fit(self, raw_documents, y=None):
"""Learn vocabulary and idf from training set.
Parameters
----------
raw_documents : iterable
an iterable which yields either str, unicode or file objects
Returns
-------
self : TfidfVectorizer
"""
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
self._tfidf.fit(X)
return self
def fit_transform(self, raw_documents, y=None):
"""Learn vocabulary and idf, return term-document matrix.
This is equivalent to fit followed by transform, but more efficiently
implemented.
Parameters
----------
raw_documents : iterable
an iterable which yields either str, unicode or file objects
Returns
-------
X : sparse matrix, [n_samples, n_features]
Tf-idf-weighted document-term matrix.
"""
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
self._tfidf.fit(X)
# X is already a transformed view of raw_documents so
# we set copy to False
return self._tfidf.transform(X, copy=False)
def transform(self, raw_documents, copy=True):
"""Transform documents to document-term matrix.
Uses the vocabulary and document frequencies (df) learned by fit (or
fit_transform).
Parameters
----------
raw_documents : iterable
an iterable which yields either str, unicode or file objects
copy : boolean, default True
Whether to copy X and operate on the copy or perform in-place
operations.
Returns
-------
X : sparse matrix, [n_samples, n_features]
Tf-idf-weighted document-term matrix.
"""
check_is_fitted(self, '_tfidf', 'The tfidf vector is not fitted')
X = super(TfidfVectorizer, self).transform(raw_documents)
return self._tfidf.transform(X, copy=False)
| bsd-3-clause |
liangz0707/scikit-learn | sklearn/utils/metaestimators.py | 283 | 2353 | """Utilities for meta-estimators"""
# Author: Joel Nothman
# Andreas Mueller
# Licence: BSD
from operator import attrgetter
from functools import update_wrapper
__all__ = ['if_delegate_has_method']
class _IffHasAttrDescriptor(object):
"""Implements a conditional property using the descriptor protocol.
Using this class to create a decorator will raise an ``AttributeError``
if the ``attribute_name`` is not present on the base object.
This allows ducktyping of the decorated method based on ``attribute_name``.
See https://docs.python.org/3/howto/descriptor.html for an explanation of
descriptors.
"""
def __init__(self, fn, attribute_name):
self.fn = fn
self.get_attribute = attrgetter(attribute_name)
# update the docstring of the descriptor
update_wrapper(self, fn)
def __get__(self, obj, type=None):
# raise an AttributeError if the attribute is not present on the object
if obj is not None:
# delegate only on instances, not the classes.
# this is to allow access to the docstrings.
self.get_attribute(obj)
# lambda, but not partial, allows help() to work with update_wrapper
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
# update the docstring of the returned function
update_wrapper(out, self.fn)
return out
def if_delegate_has_method(delegate):
"""Create a decorator for methods that are delegated to a sub-estimator
This enables ducktyping by hasattr returning True according to the
sub-estimator.
>>> from sklearn.utils.metaestimators import if_delegate_has_method
>>>
>>>
>>> class MetaEst(object):
... def __init__(self, sub_est):
... self.sub_est = sub_est
...
... @if_delegate_has_method(delegate='sub_est')
... def predict(self, X):
... return self.sub_est.predict(X)
...
>>> class HasPredict(object):
... def predict(self, X):
... return X.sum(axis=1)
...
>>> class HasNoPredict(object):
... pass
...
>>> hasattr(MetaEst(HasPredict()), 'predict')
True
>>> hasattr(MetaEst(HasNoPredict()), 'predict')
False
"""
return lambda fn: _IffHasAttrDescriptor(fn, '%s.%s' % (delegate, fn.__name__))
| bsd-3-clause |
djgagne/scikit-learn | examples/model_selection/plot_confusion_matrix.py | 244 | 2496 | """
================
Confusion matrix
================
Example of confusion matrix usage to evaluate the quality
of the output of a classifier on the iris data set. The
diagonal elements represent the number of points for which
the predicted label is equal to the true label, while
off-diagonal elements are those that are mislabeled by the
classifier. The higher the diagonal values of the confusion
matrix the better, indicating many correct predictions.
The figures show the confusion matrix with and without
normalization by class support size (number of elements
in each class). This kind of normalization can be
interesting in case of class imbalance to have a more
visual interpretation of which class is being misclassified.
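For example, under such row-wise normalization a made-up confusion-matrix
row of [10, 2, 0] becomes approximately [0.83, 0.17, 0.00].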
Here the results are not as good as they could be as our
choice for the regularization parameter C was not the best.
In real life applications this parameter is usually chosen
using :ref:`grid_search`.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Run classifier, using a model that is too regularized (C too low) to see
# the impact on the results
classifier = svm.SVC(kernel='linear', C=0.01)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
plt.show()
| bsd-3-clause |
dsokoler/StockAnalytics | Stock.py | 1 | 37881 | import sys, configparser, json
from datetime import datetime
importError = False;
try:
import requests
except ImportError:
print("Please install requests: 'pip3 install requests'");
importError = True;
#MatPlotLib for visualizations
try:
import matplotlib.pyplot as plt;
import matplotlib.ticker as mticker
from matplotlib.finance import candlestick_ohlc
import matplotlib.dates as mdates
except ImportError:
print("Please install matplotlib: 'pip3 install matplotlib'");
importError = True;
#Pandas is for datetimes
try:
import pandas as pd
except ImportError:
print("Please install Pandas: 'pip3 install pandas'");
importError = True;
#Numpy for line of best fit
try:
import numpy as np
except ImportError:
print("Please install Numpy: 'pip3 install numpy'");
importError = True;
try:
from Database import Database;
except ImportError:
print("Unable to find the Database module");
importError = True;
if (importError):
sys.exit();
class Stock:
"""
The object where we store all of our stock information
Process:
-Initialize stock object
-Retrieve stock information (retrieve methods)
-Perform calculations (calculate methods)
-PROFIT (hopefully)
For Calculate Methods: KeyError means there is NO data for that day, something being
none indicates that there was PARTIAL data for that day
"""
#Retrieve information needed to access Intrinio API
config = configparser.ConfigParser();
config.read("config.ini");
username = config["INTRINIO"]["username"];
password = config["INTRINIO"]["password"];
periods = [12, 26, 50, 200];
kamaPeriod = 7;
#Methods must be in the form 'calculateCALC'. When we add a new calculation
# method, add it's abbreviation and whether or not it utilizes a period here
methods = {
# Name : Uses Period
'SMA': True,
'TMA': True,
'SlopeMA': True,
'EMA': True,
'KAMA': True,
'Stochastics': False,
'AD': False,
'Aroon': True,
'OBV': False
}
def __init__(self, ticker, startDate, endDate, limit, period, database, outlier=1):
"""
Ticker: the abbreviation of this stock
startDate: the date on which to start all our analysis
endDate: the date on which to end all our analysis
limit: the PE*PBV ratio limit
period: 1 or more periods to calculate our MAs on
"""
if (type(period) == int and period not in Stock.periods):
Stock.periods.append(period);
elif(type(period) == list):
for p in period:
if (p not in Stock.periods):
Stock.periods.append(p);
self.ticker = ticker; #Ticker of this stock
self.information = None; #Information on the stock
self.ratio = None; #The ratio indicator with all data
self.ratioWithoutOutliers = None; #The ratio indicator sans outliers
self.decision = None; #Is this stock a good or bad pick
if (startDate is None):
self.startDate = pd.to_datetime('1800-01-01');
else:
self.startDate = startDate; #The start of the timeframe for our analysis
if (endDate is None):
self.endDate = pd.to_datetime('today');
else:
self.endDate = endDate; #The end of the timeframe for our analysis
self.limit = limit; #The PE*PBV limit (less means a better stock, but is harder to find)
self.outlier = outlier; #Number of standard deviations to be considered an outlier
self.database = Database(database);
self.database.updateStockInformation(ticker);
#self.makeDecisionInTimeframe(startDate, endDate, outlier);
self.earliestDate, self.latestDate, data = self.database.retrieveAllInformationForStock(ticker);
#FIND A WAY TO AUTOMATE THIS BASED ON DATABASE COLUMNS
#Dictionary is for easy analysis at certain time periods, list is for easy plotting over time periods
self.opens = data['Dict']['Opens']; #Holds date to open price
self.openList = data['List']['Opens']; #(Date, open)
self.closes = data['Dict']['Closes']; #Holds date to close price
self.closeList = data['List']['Closes']; #(Date, close)
self.highs = data['Dict']['Highs']; #Holds date to high price
self.highList = data['List']['Highs']; #(Date, high)
self.lows = data['Dict']['Lows']; #Holds date to low price
self.lowList = data['List']['Lows']; #(Date, low)
self.volumes = data['Dict']['Volumes']; #Holds volumes for the stock
self.volumeList = data['List']['Volumes']; #(Date, volume)
self.pe = data['Dict']['PE']; #Holds Price/Earnings ratios for the stock
self.peList = data['List']['PE']; #(Date, P/E)
self.pbv = data['Dict']['PBV']; #Holds Price/Book Value ratios for the stock
self.pbvList = data['List']['PBV'] #(Date, P/BV)
#SMA, EMA, KAMA, SlopeMA
self.calcs = {}; #maName: {maPeriod: {date: value}}
self.calcsList = {}; #maName: {maPeriod: [date, value]}
#Force 'SMA' to be first calculation, as we need it for others
methodKeyList = list(Stock.methods.keys());
methodKeyList.insert(0, methodKeyList.pop(methodKeyList.index('SMA')))
#Do all our calculations
for method in methodKeyList:
funcToCall = getattr(Stock, 'calculate' + method);
#If 'funcToCall' uses a period
periodsToUse = Stock.periods;
if (Stock.methods[method]):
#Avoid KeyError
if (method not in self.calcs.keys()):
self.calcs[method] = {};
if (method not in self.calcsList.keys()):
self.calcsList[method] = {};
for p in periodsToUse:
#If we've already calculated this for the stock
if (self.database.isCalculated(self.ticker, method, p)):
self.calcs[method][p], self.calcsList[method][p] = self.database.retrieveCalculation(self.ticker, method, p, self.startDate, self.endDate);
#If we haven't already calculated this
else:
funcToCall(self, period=p);
#'funcToCall' doesn't use a period
else:
#If we've already calculated this for the stock
if (self.database.isCalculated(self.ticker, method, None)):
self.calcs[method], self.calcsList[method] = self.database.retrieveCalculation(self.ticker, method, None, self.startDate, self.endDate);
#If we haven't already calculated this
else:
funcToCall(self);
self.objects = {} #Holds items such as Doji, Marubozu, Spinning Tops, etc
#self.identifyObjects(); #Marubozu, Doji, Spinning Tops, etc
#self.database.printDB(self.ticker);
#-----------------------------------Calculation Methods-----------------------------------------
#Should probably break these up by type (trend/momentum prediction, overlap studies, etc)
#KAMA > EMA > (SlopeMA == TMA == SMA)
def calculateSMA(self, period=12):
"""
Calculates the SMA for this stock with the given time period
        SMA = sum(last X closes) / X
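        For example (illustrative closes): a 3-day SMA over closes 10, 11, 12 is
        (10 + 11 + 12) / 3 = 11.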
"""
date = self.earliestDate;
if ('SMA' not in self.calcs.keys()):
self.calcs['SMA'] = {};
elif (period in self.calcs['SMA'].keys()):
print("Already calculated SMA for period " + str(period));
return;
if (period not in self.calcs['SMA'].keys()):
self.calcs['SMA'][period] = {};
if ('SMA' not in self.calcsList.keys()):
self.calcsList['SMA'] = {};
if (period not in self.calcsList['SMA'].keys()):
self.calcsList['SMA'][period] = [];
lastPeriodCloses = [];
while(len(lastPeriodCloses) < period):
dateStr = str(date).split(' ')[0];
#Will err if we try to access a day the market was closed
try:
lastPeriodCloses.append( self.closes[dateStr] );
except KeyError:
pass;
date += pd.Timedelta('1 day');
while(date <= self.latestDate):
dateStr = str(date).split(' ')[0]
sma = sum(lastPeriodCloses)/period; #mean(lastPeriodCloses)
#Happens when we hit a weekend
todayClose = None;
try:
todayClose = self.closes[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
continue;
#todayClose would be None if the API did not have close data for this date
if (todayClose is not None):
lastPeriodCloses.append(todayClose);
if (len(lastPeriodCloses) > period):
lastPeriodCloses.pop(0);
self.calcs['SMA'][period][dateStr] = sma;
self.calcsList['SMA'][period].append( [date, sma] );
date += pd.Timedelta('1 day');
self.database.addCalculation(self.ticker, 'SMA', period, self.calcsList['SMA'][period]);
def calculateTMA(self, period=12):
"""
Calculates the triangular moving average for a stock over a period of time
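        (i.e. an SMA applied to the SMA series, which double-smooths the closes)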
"""
if ('TMA' not in self.calcs.keys()):
self.calcs['TMA'] = {};
elif (period in self.calcs['TMA'].keys()):
print("Already calculated TMA for period " + str(period));
return;
if (period not in self.calcs['TMA'].keys()):
self.calcs['TMA'][period] = {};
if ('TMA' not in self.calcsList.keys()):
self.calcsList['TMA'] = {};
if (period not in self.calcsList['TMA'].keys()):
self.calcsList['TMA'][period] = [];
periodToTD = pd.Timedelta(str(period) + ' days');
date = self.earliestDate + periodToTD;
dateStr = str(date).split(' ')[0];
        #Fill the initial window of SMA values
lastPeriodSMAs = [];
while(len(lastPeriodSMAs) < (period - 1)):
try:
lastPeriodSMAs.append(self.calcs['SMA'][period][dateStr]);
except KeyError as k:
pass;
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
        while(date <= self.latestDate):
            try:
                lastPeriodSMAs.append(self.calcs['SMA'][period][dateStr]);
            except KeyError:
                date += pd.Timedelta('1 day');
                dateStr = str(date).split(' ')[0];
                continue;
            #Keep the window at 'period' SMA values so the average below stays correct
            if (len(lastPeriodSMAs) > period):
                lastPeriodSMAs.pop(0);
            tma = sum(lastPeriodSMAs) / period;
            self.calcs['TMA'][period][dateStr] = tma;
            self.calcsList['TMA'][period].append( [date, tma] );
            date += pd.Timedelta('1 day');
            dateStr = str(date).split(' ')[0];
self.database.addCalculation(self.ticker, 'TMA', period, self.calcsList['TMA'][period]);
def calculateSlopeMA(self, period=2):
"""
        Calculates the moving average of the slope of the closes over the specified period
slope = (y2 - y1)/(x2 - x1)
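        For example (illustrative numbers): closes of 100 and 104 taken three
        business days apart give a slope of (104 - 100) / 3 ~= 1.33 per day.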
"""
if ('SlopeMA' not in self.calcs.keys()):
self.calcs['SlopeMA'] = {};
elif (period in self.calcs['SlopeMA'].keys()):
print("Already calculated SlopeMA for period " + str(period));
return;
if (period not in self.calcs['SlopeMA'].keys()):
self.calcs['SlopeMA'][period] = {};
if ('SlopeMA' not in self.calcsList.keys()):
self.calcsList['SlopeMA'] = {};
if (period not in self.calcsList['SlopeMA'].keys()):
self.calcsList['SlopeMA'][period] = [];
date = self.earliestDate;
dateStr = None;
lastPeriodCloses = [];
lastPeriodDates = [];
while(len(lastPeriodCloses) < (period - 1) ):
dateStr = str(date).split(' ')[0];
#Will err if we try to access a day the market was closed
try:
lastPeriodCloses.append( self.closes[dateStr] );
except KeyError:
pass;
date += pd.Timedelta('1 day');
slopeOverPeriod = 0;
while(date <= self.latestDate):
dateStr = str(date).split(' ')[0];
#Occurs when we try to access a day that the market was closed
todayClose = None;
try:
todayClose = self.closes[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
continue;
#Occurs if the API does not have close information for this date
if (todayClose is not None):
lastPeriodCloses.append(todayClose);
lastPeriodDates.append(dateStr);
                #busday_count expects (earlier, later); guard against a zero span on the first pass
                businessDays = np.busday_count(lastPeriodDates[0], lastPeriodDates[-1]);
                if (businessDays > 0):
                    slopeOverPeriod = (lastPeriodCloses[-1] - lastPeriodCloses[0]) / businessDays;
                    self.calcs['SlopeMA'][period][dateStr] = slopeOverPeriod;
                    self.calcsList['SlopeMA'][period].append( [dateStr, slopeOverPeriod] );
if (len(lastPeriodCloses) > period):
lastPeriodCloses.pop(0);
lastPeriodDates.pop(0);
date += pd.Timedelta('1 day');
self.database.addCalculation(self.ticker, 'SlopeMA', period, self.calcsList['SlopeMA'][period]);
def calculateEMA(self, period=12, emaType='Single', percentage=None):
"""
Calculates the EMA for this stock based on the specified time period (in days)
For regular EMA, emaType='Single' (Default)
For double EMA, emaType='Double'
For triple EMA, emaType='Triple'
EITHER PERIOD OR PERCENTAGE SHOULD HAVE A VALUE, THE OTHER SHOULD BE NONE
To convert from percentage to period: (2/percentage) - 1
Multiplier: 2/(period + 1)
EMA = prevEMA + Multiplier * (Close - prevEMA)
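        For example (illustrative numbers): with period=12 the multiplier is
        2/(12 + 1) ~= 0.154, so prevEMA=100 and Close=102 give
        EMA = 100 + 0.154 * (102 - 100) ~= 100.31.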
"""
emaTypeToNumber = {
'Single': '',
'Double': '2',
'Triple': '3'
};
toMult = period;
if(period is None):
toMult = (2/percentage) - 1;
multiplier = 2/(toMult + 1);
periodToTD = pd.Timedelta(str(period) + ' days');
if ('EMA' not in self.calcs.keys()):
self.calcs['EMA'] = {};
elif (period in self.calcs['EMA'].keys()):
print("Already calculated EMA for period " + str(period));
return;
if (period not in self.calcs['EMA'].keys()):
self.calcs['EMA'][period] = {};
if ('EMA' not in self.calcsList.keys()):
self.calcsList['EMA'] = {};
if (period not in self.calcsList['EMA'].keys()):
self.calcsList['EMA'][period] = [];
date = self.earliestDate + periodToTD;
dateStr = str(date - pd.Timedelta('1 day')).split(' ')[0];
#UGLY AF!!! Used to prevent keyerrors when the starting day is a Monday
prevEma = None;
if (dateStr not in self.calcs['EMA'][period].keys()):
counter = 0;
while (prevEma is None):
d = date + pd.Timedelta(str(counter) + ' days');
dStr = str(d).split(' ')[0];
if (dStr not in self.calcs['SMA'][period].keys()):
counter += 1;
continue;
else:
prevEma = self.calcs['SMA'][period][dStr]
else:
prevEma = self.calcs['EMA'][period][dateStr];
#This makes it easier to do double/triple EMA
def ema(prevEma, multiplier, close):
return (prevEma + multiplier*(close - prevEma));
while(date <= self.latestDate):
dateStr = str(date).split(' ')[0]
todayClose = None;
#Happens when the market wasn't open, (holiday, weekend, etc)
try:
                todayClose = self.closes[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
continue;
expMovAvg = None;
emaName = None;
#Occurs if the API didn't have close information for this date
if (todayClose is None):
expMovAvg = prevEma;
else:
expMovAvg = ema(prevEma, multiplier, todayClose);
#( 2*EMA(n) ) – ( EMA(EMA(n)) ) where ‘n’ is #ofDays
if (emaType == 'Double'):
expMovAvg = 2 * ema(prevEma, multiplier, expMovAvg) - ema( prevEma, multiplier, ema(prevEma, multiplier, expMovAvg) );
#3*EMA(n) – 3*EMA(EMA(n)) + EMA(EMA(EMA(n)))
elif (emaType == 'Triple'):
expMovAvg = 3 * ema(prevEma, multiplier, expMovAvg) - 3 * ema( prevEma, multiplier, ema(prevEma, multiplier, expMovAvg) ) + ema(prevEma, multiplier, ema(prevEma, multiplier, ema(prevEma, multiplier, expMovAvg)));
prevEma = expMovAvg;
            emaName = (emaTypeToNumber[emaType] + 'EMA');
            #Double/Triple EMAs are stored under their own keys ('2EMA', '3EMA'); create them on first use
            self.calcs.setdefault(emaName, {}).setdefault(period, {})[dateStr] = expMovAvg;
            self.calcsList.setdefault(emaName, {}).setdefault(period, []).append( [date, expMovAvg] );
date += pd.Timedelta('1 day');
self.database.addCalculation(self.ticker, 'EMA', period, self.calcsList['EMA'][period]);
def calculateKAMA(self, period=10, fastest=2, slowest=30):
"""
Period: recommended to be 10
Fastest: recommended to be 2
Slowest: recommended to be 30
This is a Kaufman Adaptive Moving Average (KAMA):
1. Efficiency Ratio = Change / Volatility
a. Change = abs(Close – close10PeriodsAgo)
b. Volatility is the sum of the absolute value of the last ten price changes
2. FastestSC = 2/(FastestEMA + 1)
3. SlowestSC = 2/(SlowestEMA + 1)
4. Smoothing Constant = [ER * (FastestSC – SlowestSC) + SlowestSC]^2 = [ER * (2/3 – 2/31) + 2/31]^2
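        For example (illustrative ER): with ER=0.5, fastest=2 and slowest=30,
        SC = (0.5 * (2/3 - 2/31) + 2/31)^2 ~= 0.134.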
"""
date = self.earliestDate;
dateStr = None;
if ('KAMA' not in self.calcs.keys()):
self.calcs['KAMA'] = {};
elif (period in self.calcs['KAMA'].keys()):
print("Already calculated KAMA for period " + str(period));
return;
if (period not in self.calcs['KAMA'].keys()):
self.calcs['KAMA'][period] = {};
if ('KAMA' not in self.calcsList.keys()):
self.calcsList['KAMA'] = {};
if (period not in self.calcsList['KAMA'].keys()):
self.calcsList['KAMA'][period] = [];
lastPeriodCloses = [];
lastPeriodPriceChanges = [] #index corresponds directly to the same index in lastPeriodCloses
#Fill up our initial 'period' closes, so we can calculate 'Change' easily
while(len(lastPeriodCloses) < period):
dateStr = str(date).split(' ')[0];
#Will err if we try to access a day the market was closed
try:
todayClose = self.closes[dateStr]
except KeyError:
date += pd.Timedelta('1 day');
continue;
#Occurs when the API does not have close information for this date
if (todayClose is None):
date += pd.Timedelta('1 day');
continue;
lastPeriodCloses.append( self.closes[dateStr] );
#Calculate the most recent change
if (len(lastPeriodCloses) > 1):
lastPeriodPriceChanges.append( abs(lastPeriodCloses[-1] - lastPeriodCloses[-2]) );
else:
lastPeriodPriceChanges.append(0);
date += pd.Timedelta('1 day');
#Our first "prevKama" is just the SMA of the first 'period' closes
prevKama = sum(lastPeriodCloses)/len(lastPeriodCloses);
#Start doing our calculations
while (date <= self.latestDate):
dateStr = str(date).split(' ')[0];
#Occurs when we try to access a day that the market was closed
todayClose = None;
try:
todayClose = self.closes[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
continue;
kama = prevKama;
#Make sure we don't do calculations on 'None' data
if (todayClose is not None):
#Change = abs(Close – close10PeriodsAgo)
change = abs(todayClose - lastPeriodCloses[0]);
#Volatility is the sum of the absolute value of the last ten price changes
volatility = sum(lastPeriodPriceChanges);
#Efficiency Ratio = Change / Volatility
er = change/volatility;
#Calculate our fastest Smoothing Constant
fastestSC = None;
if (fastest is None):
fastestSC = 2/(2 + 1);
else:
fastestSC = 2/(fastest + 1);
#Calculate our slowest Smoothing Constant
slowestSC = None;
if (slowest is None):
slowestSC = 2/(10 + 1);
else:
slowestSC = 2/(slowest + 1);
#Our real Smoothing Constant = [ER * (FastestSC – SlowestSC) + SlowestSC]^2 = [ER * (2/3 – 2/31) + 2/31]^2
sc = (er * (fastestSC - slowestSC) + slowestSC) ** 2; # '**' is exponentiation
#KAMA = prevKAMA + SmoothingConstant * (Close - prevKAMA)
kama = prevKama + sc * (todayClose - prevKama);
self.calcs['KAMA'][period][dateStr] = kama;
self.calcsList['KAMA'][period].append( [date, kama] );
#Fix our last KAMA and our last 'period' information (close and change)
prevKama = kama;
lastPeriodCloses.append(todayClose);
lastPeriodCloses.pop(0);
lastPeriodPriceChanges.append( abs(lastPeriodCloses[-1] - lastPeriodCloses[-2]) );
lastPeriodPriceChanges.pop(0)
date += pd.Timedelta('1 day');
self.database.addCalculation(self.ticker, 'KAMA', period, self.calcsList['KAMA'][period]);
def calculateStochastics(self):
"""
'K': 100 * [(C - L5) / (H5 - L5)]
'D': Average of last three K's
C: most recent closing price
L5Close: lowest of the five previous closing prices
HX: highest of the X previous sessions
LX: lowest of the X previous sessions
Starting at the earliest date, calculate the above and store
Each date in stochastics is a dictionary holding 'K' and 'D'
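        For example (illustrative prices): C=105, L5=100, H5=110 gives
        K = 100 * [(105 - 100) / (110 - 100)] = 50.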
"""
stochastics = {};
stochasticList = []
lastFiveCloses = [];
lastFiveHighs = [];
lastFiveLows = [];
date = self.earliestDate;
currentDate = self.latestDate;
lastThreeKs = [];
#Calculate stochastics for every date we have
prevK = 0;
prevD = 0;
while(date <= currentDate):
dateStr = str(date).split(' ')[0]
closeValue = None;
highValue = None;
lowValue = None;
try:
#Retrieve closing value for this date
closeValue = self.closes[dateStr];
#Retrieve highest value for this date
highValue = self.highs[dateStr];
#Retrieve lowest value for this date
lowValue = self.lows[dateStr];
except KeyError: #odds are this is caused by this date being a non-trading day, or not having today's close
date += pd.Timedelta("1 day");
continue;
#Occurs when the API doesn't have particular values, just take the last ones...
if (closeValue is None or highValue is None or lowValue is None):
k = prevK;
d = prevD;
else:
lastFiveCloses.append(closeValue);
if (len(lastFiveCloses) > 5):
lastFiveCloses.pop(0);
lastFiveHighs.append(highValue);
if (len(lastFiveHighs) > 5):
lastFiveHighs.pop(0);
lastFiveLows.append(lowValue);
if (len(lastFiveLows) > 5):
lastFiveLows.pop(0);
#Calculate 'k' point and 'd' point
try:
k = 100 * ( (lastFiveCloses[-1] - min(lastFiveLows)) / (max(lastFiveHighs) - min(lastFiveLows)) )
d = ( sum(lastThreeKs) / 3);
except ZeroDivisionError as zde:
date += pd.Timedelta("1 day");
continue;
prevK = k;
prevD = d;
lastThreeKs.append(k);
if (len(lastThreeKs) > 3):
lastThreeKs.pop(0);
#Store values
stochastics[dateStr] = {};
stochastics[dateStr]['K'] = k;
stochastics[dateStr]['D'] = d;
stochasticList.append( [date, k, d] );
date += pd.Timedelta("1 day");
self.database.addCalculation(self.ticker, 'Stochastics', None, stochasticList);
self.calcs['Stochastics'] = stochastics;
self.calcsList['Stochastics'] = stochasticList;
def calculateAD(self):
"""
Money Flow Multiplier = [(Close - Low) - (High - Close)] / (High - Low) | Should be -1 <= x <= 1
Money Flow Volume = MFM * (Volume for Period)
Accumulation Distribution Point = (Previous AD) + MFV
Period is 'day', 'month', 'year'
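        For example (made-up prices): High=12, Low=10, Close=11.5 gives
        MFM = [(11.5 - 10) - (12 - 11.5)] / (12 - 10) = 0.5.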
"""
adDict = {};
adList = [];
ad = 0;
for date in sorted(self.volumes.keys()):
close = None;
high = None;
low = None;
try:
close = self.closes[date];
high = self.highs[date];
low = self.lows[date];
except KeyError:
continue;
#Occurs if the API does not have some data for this date
if (close is not None and high is not None and low is not None):
#Calculate daily MFM and MFV
#-MFM as zero affects nothing, and helps avoid ZeroDivisionError (when high == low)
mfm = 0;
if (high != low):
mfm = ((close - low) - (high - close)) / (high - low);
mfv = mfm * self.volumes[date];
ad += mfv;
#Fill in our data structures
adDict[date] = ad;
adList.append( [pd.to_datetime(date), ad] );
self.database.addCalculation(self.ticker, 'AD', None, adList);
self.calcs['AD'] = adDict;
self.calcsList['AD'] = adList;
def calculateAroon(self, period=12):
"""
Measures if a security is in a trend, the magnitude of that trend, and whether that trend is likely to reverse (or not)
Aroon Up: ( (25 - Days Since 25 Day High) / 25 ) * 100
Aroon Down: ( (25 - Days Since 25 Day Low) / 25 ) * 100
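        For example, if the 25-day high occurred 5 days ago (illustrative),
        Aroon Up = ((25 - 5) / 25) * 100 = 80.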
"""
aroonDict = {};
aroonList = [];
last25Highs = [];
last25Lows = [];
date = self.earliestDate;
currentDate = self.latestDate;
#Calculate Aroon Indicators for every date we have
aroonUp = 0;
aroonDown = 0;
while(date <= currentDate):
dateStr = str(date).split(' ')[0]
#Retrieve highest value for this date
highValue = None;
try:
highValue = self.highs[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
continue;
#Retrieve lowest value for this date
lowValue = None;
try:
lowValue = self.lows[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
continue;
#Occurs if the API does not have some information for that day
if (highValue is not None and lowValue is not None):
last25Highs.append(highValue);
if (len(last25Highs) > period):
last25Highs.pop(0);
last25Lows.append(lowValue);
if (len(last25Lows) > period):
last25Lows.pop(0);
#Calculate Aroon Up
timeSinceHigh = period - last25Highs.index(max(last25Highs));
aroonUp = ( (period - timeSinceHigh) / period ) * 100;
#Calculate Aroon Down
timeSinceLow = period - last25Lows.index(min(last25Lows));
aroonDown = ( (period - timeSinceLow) / period ) * 100;
#Ensure we don't get KeyErrors
if (period not in aroonDict.keys()):
aroonDict[period] = {};
if (dateStr not in aroonDict[period].keys()):
aroonDict[period][dateStr] = {};
#Fill in our data structures
aroonDict[period][dateStr]["Up"] = aroonUp;
aroonDict[period][dateStr]["Down"] = aroonDown;
aroonList.append( [date, aroonUp, aroonDown] );
#Increment our counter
date += pd.Timedelta('1 day');
self.database.addCalculation(self.ticker, 'Aroon', period, aroonList);
self.calcs['Aroon'] = aroonDict;
if ('Aroon' not in self.calcsList.keys()):
self.calcsList['Aroon'] = {};
self.calcsList['Aroon'][period] = aroonList;
def calculateOBV(self):
"""
Calculates the On-Balance Volume for a stock
i. If todayClose > yesterdayClose, currentOBV = prevOBV + todayVolume
ii. If todayClose < yesterdayClose, currentOBV = prevOBV – todayVolume
iii. If todayClose == yesterdayClose, currentOBV = prevOBV
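        For example (illustrative numbers): prevOBV=1000 and todayVolume=200 with
        today's close above yesterday's gives currentOBV = 1200.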
When plotting, don't have Y axis labels. We really only care about the
changes from point to point (slope), and what the line looks like.
-High positive slope may indicate a soon-to-be price rise
        -High negative slope may indicate a soon-to-be price fall
"""
obv = 0;
date = self.earliestDate;
dateStr = str(date).split(' ')[0];
obvDict = {};
obvList = [];
#Get our first data point
yesterdayClose = None;
while(yesterdayClose is None):
volume = None;
try:
yesterdayClose = self.closes[dateStr];
volume = self.volumes[dateStr];
obv += volume;
except KeyError:
pass;
else:
#Need to add info to data structures before increasing date!
obvDict[dateStr] = obv;
obvList.append( [date, obv] );
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
#Calculate the rest of the data points
while(date <= self.latestDate):
todayClose = None;
todayVolume = None;
#KeyError occurs when we try to access a day the market was closed
try:
todayClose = self.closes[dateStr];
todayVolume = self.volumes[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
continue;
if (todayClose > yesterdayClose):
obv = obv + todayVolume;
elif (todayClose < yesterdayClose):
obv = obv - todayVolume;
obvDict[dateStr] = obv;
obvList.append( [date, obv] );
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
yesterdayClose = todayClose;
self.calcs['OBV'] = obvDict;
self.calcsList['OBV'] = obvList;
self.database.addCalculation(self.ticker, 'OBV', None, obvList);
#-------------------------------Trend Identifier Methods--------------------------------
def identifyObjects(self, candleOC, dojiOC, marubozuDiff):
"""
Identify all objects of relevance (IN THE APPROPRIATE ORDER!!)
"""
        self.identifyLongCandles(candleOC);
        self.identifyDoji(dojiOC);
        self.identifyMarubozu(marubozuDiff);
def identifyLongCandles(self, openCloseMultiplier):
"""
Identify dates that have long white or black candles
hiLoMultiplier: the difference required to be a candle (i.e. high must be hiLoMultiplier*low)
Long White Candle: 'LWC' : Open >>>>> Close
Long Black Candle: 'LBC' : Close >>>>> Open
HIGH VS LOW PERCENTAGE/MULTIPLIER TO BE A LWC/LBC?
-e.g. for LWC open must be 1.5x close (or whatever it is)
"""
if ('Candles' in self.objects.keys()):
print("Already calculated Long Candles for " + self.ticker);
return;
#Ensure no KeyErrors
self.objects['Candles'] = {}
self.objects['Candles']['LWC'] = {};
self.objects['Candles']['LBC'] = {};
#First Date!
date = self.earliestDate;
dateStr = str(date).split(' ')[0];
#Start Calculations
dayOpen = None;
dayClose = None;
while(date != self.latestDate):
#Occurs when we have a day that the market was closed
try:
dayOpen = self.opens[dateStr];
dayClose = self.closes[dateStr];
except KeyError:
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
continue;
#LWC
if (dayOpen >= dayClose * openCloseMultiplier):
self.objects['Candles']['LWC'][dateStr] = True;
#LBC
elif (dayClose >= dayOpen * openCloseMultiplier):
self.objects['Candles']['LBC'][dateStr] = True;
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
def identifyDoji(self, openCloseMultiplier):
"""
Identify any points in time where there are any form of doji (open and close extremely close together)
Regular Doji: 'D' : open and close are extremely close together
Doji Evening Star: 'DES' : Long White Candle + Doji
Doji Morning Star: 'DMS' : Long Black Candle + Doji
Long Legged Doji: 'LLD' : Small Real Body w/ long upper and lower shadows of approximately equal length
Dragonfly Doji: 'DD' : Open, close, and high are all equal, with a long lower shadow
: Look for Long White Candle beforehand
Gravestone Doji: 'GD' : Open, close, and low are all equal, with a long upper shadow
: Look for Long Black Candle beforehand
"""
#Day + 1 b/c we are looking for LWC or LBC before some of these
        if ('Candles' not in self.objects.keys()):
print("Please run 'identifyLongCandles' before 'identifyDoji'");
sys.exit();
if ('Doji' in self.objects.keys()):
print("Already identified Doji locations for " + self.ticker);
return;
self.objects['Doji'] = {}; #Doji holder
self.objects['Doji']['D'] = {}; #Regular Doji
self.objects['Doji']['DES'] = {}; #Doji Evening Star
self.objects['Doji']['DMS'] = {}; #Doji Morning Star
self.objects['Doji']['LLD'] = {}; #Long Legged Doji
self.objects['Doji']['DD'] = {}; #Dragonfly Doji
self.objects['Doji']['GD'] = {}; #Gravestone Doji
date = self.earliestDate + pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
while(date != self.latestDate):
#hasLWC = (dateStr in self.);
#hasLBC = ();
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
def identifyMarubozu(self, difference):
"""
Identify any points in time where there are any form of marubozu
difference: how much different the open/high, etc need to be to form a Marubozu
Black Marubozu: open was the high and the close was the low
White Marubozu: open was the low and the close was the high
"""
date = self.earliestDate;
dateStr = str(date).split(' ')[0];
if ('Marubozu' in self.objects.keys()):
print("Already calculated Marubozu Locations for " + self.ticker);
self.objects['Marubozu'] = {};
self.objects['Marubozu']['B'] = {};
self.objects['Marubozu']['W'] = {};
dayOpen = None;
dayClose = None;
dayHigh = None;
dayLow = None;
dayVolume = None;
while(date != self.latestDate):
#Grab all the information we need
            try:
                dayOpen = self.opens[dateStr];
                dayClose = self.closes[dateStr];
                dayHigh = self.highs[dateStr];
                dayLow = self.lows[dateStr];
            except KeyError: #We need all this info, so missing any one piece means we skip the day (unfortunately)
                date += pd.Timedelta('1 day');
                dateStr = str(date).split(' ')[0];
                continue;
#MARUBOZU IN TERMS OF PERCENTAGE?
#-E.G. for Black Marubozu how close to the high does the open have to be for it to qualify?
if (dayOpen == dayHigh and dayClose == dayLow):
self.objects['Marubozu']['B'][dateStr] = True;
elif (dayClose == dayHigh and dayOpen == dayLow):
self.objects['Marubozu']['W'][dateStr] = True;
date += pd.Timedelta('1 day');
dateStr = str(date).split(' ')[0];
#-------------------------------Plotting Methods----------------------------------------
def plotPEtoPBV(self, startDate, endDate):
'''Plots the PE * PBV value for a stock'''
#Startdate and enddate must be in pd_datetime format already
lowestDecisionRatio = 0;
#Get our information from the object
dataList = [];
        for key in self.pe:
date = pd.to_datetime(key);
if (date < startDate or date > endDate):
continue;
#Calculate the decision ratio
decisionRatio = None;
try:
                decisionRatio = (self.pe[key] * self.pbv[key]);
except KeyError:
continue;
if (decisionRatio < lowestDecisionRatio):
lowestDecisionRatio = decisionRatio;
dataList.append( [date, decisionRatio] );
fig, ax = plt.subplots();
#Plot data
ax.scatter(*zip(*dataList));
ax.hlines(self.limit, startDate, endDate, color='r', linewidth=3);
#Format plot
fig.suptitle(self.ticker + " (P/E)*(P/BV) Score Over Time")
if (startDate is not None and endDate is not None):
plt.xlim(startDate, endDate);
plt.ylim(lowestDecisionRatio, 100);
ax.set_xlabel("Date");
ax.set_ylabel("Indicator");
fig.autofmt_xdate();
#Show plot
plt.show();
def plotAD(self, startDate, endDate):
"""
Plot this stock's Accumulation/Distribition Line
"""
        toPlot = self.calcsList['AD'];
        #CAN EITHER DO THIS, OR JUST SET THE XLIM
        if (startDate is None):
            if (endDate is None):
                pass;
            else:
                toPlot = [info for info in self.calcsList['AD'] if info[0] <= endDate];
        else:
            if (endDate is None):
                toPlot = [info for info in self.calcsList['AD'] if info[0] >= startDate];
            else:
                toPlot = [info for info in self.calcsList['AD'] if info[0] <= endDate and info[0] >= startDate]
fig, ax = plt.subplots();
ax.plot(*zip(*toPlot));
ax.hlines(0, toPlot[0][0], toPlot[-1][0], linewidth=2);
#Formatter method to get us "10M, 1M, etc"
def millions(x, pos):
return '%1.0fM' % (x*1e-6)
formatter = mticker.FuncFormatter(millions);
fig.suptitle(self.ticker + " Accumulation/Distribution Line");
ax.set_xlabel("Date");
ax.set_ylabel("A/D");
ax.yaxis.set_major_formatter(formatter);
fig.autofmt_xdate();
plt.show();
def plotClosesLineGraph(self, startDate, endDate):
"""
Plots the closes for a stock as a line graph
Rough granularity, only at the day level b/c we don't have access to intraday trade info
"""
fig, ax = plt.subplots();
ax.plot(*zip(*self.closeList));
fig.suptitle(self.ticker + " Closes");
ax.set_xlabel("Date");
ax.set_ylabel("Close Price");
fig.autofmt_xdate();
plt.show();
def plotClosesCandlestickOHLC(self, startDate, endDate, maAndPeriod):
"""
Plots the closes for a stock as a Candlestick OHLC plot
takes date, open, high, low, close, volume
movingAverages is a list of strings ('SMA', 'EMA', etc) identifying which MAs we plot
"""
dayOpen = None;
dayClose = None;
dayHigh = None;
dayLow = None;
dayVolume = None;
dohlcv = []; #Date Open High Low Close Volume
for date in self.opens.keys():
datetime = pd.to_datetime(date);
if (datetime < startDate or datetime > endDate):
continue;
try:
dayDate = mdates.date2num(datetime);
dayOpen = self.opens[date];
dayClose = self.closes[date];
dayHigh = self.highs[date];
dayLow = self.lows[date];
dayVolume = self.volumes[date];
except KeyError: #We need all this info, so missing any one piece means we skip the day (unfortunately)
continue;
#Some of these seem to be randomly missing...
if (dayDate is None or dayOpen is None or dayClose is None or dayHigh is None or dayLow is None or dayVolume is None):
continue;
dohlcv.append([dayDate, dayOpen, dayHigh, dayLow, dayClose, dayVolume]);
fig, ax = plt.subplots();
candlestick_ohlc(ax, dohlcv, colorup='#77d879', colordown='#db3f3f');
haveLabel = False;
for ma in maAndPeriod.keys():
p = maAndPeriod[ma];
#Haven't run the calcution for this moving average at this period
if (p not in self.calcsList[ma]):
funcToCall = getattr(Stock, 'calculate' + ma);
#HELLO ANONYMOUS FUNCTION CALL!
funcToCall(self, period=p);
#Prevent error if we try to plot a MA or period that doesn't exist
try:
newMasList = [entry for entry in self.calcsList[ma][p] if (entry[0] >= startDate and entry[0] <= endDate)];
ax.plot(*zip(*newMasList), label=(ma + str(p)));
haveLabel = True;
except KeyError:
print("BORKED! " + str(ma) + " :: " + str(p));
continue;
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'));
ax.xaxis.set_major_locator(mticker.MaxNLocator(10));
ax.grid(True);
plt.xlabel('Date');
plt.ylabel('Close');
plt.title(self.ticker.upper() + " Closes from " + str(startDate).split(' ')[0] + " to " + str(endDate).split(' ')[0]);
#Only add a legend if we have things that need a legend
if (haveLabel):
plt.legend();
fig.autofmt_xdate();
plt.show();
#-------------------------------Decision Making Methods----------------------------------------
def makeDecisionInTimeframe(self, startDate, endDate, outlier):
"""
Decide if this stock meets the limit criteria for a given timeframe
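        The daily score is PE * PBV; e.g. (illustrative numbers) PE=15 and PBV=2
        give a score of 30, which passes only if the configured limit is >= 30.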
"""
sDate = startDate;
eDate = endDate
if (startDate is None):
sDate = pd.to_datetime('1800-01-01');
if (endDate is None):
eDate = pd.to_datetime('today');
ratioList = []
for key in self.pe:
keyDate = pd.to_datetime(key);
if (keyDate < sDate or keyDate > eDate):
continue;
try:
pe = self.pe[key];
pbv = self.pbv[key];
except KeyError:
continue;
ratio = pe*pbv;
ratioList.append(ratio);
npRatioList = np.asarray(ratioList);
avg = np.mean(npRatioList);
std = np.std(npRatioList);
self.ratio = avg;
#Method to reject any data points outside 'm' standard deviations from the mean
def reject_outliers(data, m):
return data[abs(data - np.mean(data)) < m * np.std(data)]
npRatioListNoOutliers = reject_outliers(npRatioList, outlier);
avg2 = np.mean(npRatioListNoOutliers);
std2 = np.std(npRatioListNoOutliers);
self.ratioWithoutOutliers = avg2;
        #The unfiltered average drives the final decision; the outlier-filtered ratio is kept for inspection
        self.decision = (avg <= self.limit);
if __name__ == "__main__":
print("Stock.py isn't made to be called directly!");
sys.exit();
| mit |
mutirri/bokeh | bokeh/tests/test_protocol.py | 42 | 3959 | from __future__ import absolute_import
import unittest
from unittest import skipIf
import numpy as np
try:
import pandas as pd
is_pandas = True
except ImportError as e:
is_pandas = False
class TestBokehJSONEncoder(unittest.TestCase):
def setUp(self):
from bokeh.protocol import BokehJSONEncoder
self.encoder = BokehJSONEncoder()
def test_fail(self):
self.assertRaises(TypeError, self.encoder.default, {'testing': 1})
@skipIf(not is_pandas, "pandas does not work in PyPy.")
def test_panda_series(self):
s = pd.Series([1, 3, 5, 6, 8])
self.assertEqual(self.encoder.default(s), [1, 3, 5, 6, 8])
def test_numpyarray(self):
a = np.arange(5)
self.assertEqual(self.encoder.default(a), [0, 1, 2, 3, 4])
def test_numpyint(self):
npint = np.asscalar(np.int64(1))
self.assertEqual(self.encoder.default(npint), 1)
self.assertIsInstance(self.encoder.default(npint), int)
def test_numpyfloat(self):
npfloat = np.float64(1.33)
self.assertEqual(self.encoder.default(npfloat), 1.33)
self.assertIsInstance(self.encoder.default(npfloat), float)
def test_numpybool_(self):
nptrue = np.bool_(True)
self.assertEqual(self.encoder.default(nptrue), True)
self.assertIsInstance(self.encoder.default(nptrue), bool)
@skipIf(not is_pandas, "pandas does not work in PyPy.")
def test_pd_timestamp(self):
ts = pd.tslib.Timestamp('April 28, 1948')
self.assertEqual(self.encoder.default(ts), -684115200000)
class TestSerializeJson(unittest.TestCase):
def setUp(self):
from bokeh.protocol import serialize_json, deserialize_json
self.serialize = serialize_json
self.deserialize = deserialize_json
def test_with_basic(self):
self.assertEqual(self.serialize({'test': [1, 2, 3]}), '{"test": [1, 2, 3]}')
def test_with_np_array(self):
a = np.arange(5)
self.assertEqual(self.serialize(a), '[0, 1, 2, 3, 4]')
@skipIf(not is_pandas, "pandas does not work in PyPy.")
def test_with_pd_series(self):
s = pd.Series([0, 1, 2, 3, 4])
self.assertEqual(self.serialize(s), '[0, 1, 2, 3, 4]')
def test_nans_and_infs(self):
arr = np.array([np.nan, np.inf, -np.inf, 0])
serialized = self.serialize(arr)
deserialized = self.deserialize(serialized)
assert deserialized[0] == 'NaN'
assert deserialized[1] == 'Infinity'
assert deserialized[2] == '-Infinity'
assert deserialized[3] == 0
@skipIf(not is_pandas, "pandas does not work in PyPy.")
def test_nans_and_infs_pandas(self):
arr = pd.Series(np.array([np.nan, np.inf, -np.inf, 0]))
serialized = self.serialize(arr)
deserialized = self.deserialize(serialized)
assert deserialized[0] == 'NaN'
assert deserialized[1] == 'Infinity'
assert deserialized[2] == '-Infinity'
assert deserialized[3] == 0
@skipIf(not is_pandas, "pandas does not work in PyPy.")
def test_datetime_types(self):
"""should convert to millis
"""
idx = pd.date_range('2001-1-1', '2001-1-5')
df = pd.DataFrame({'vals' :idx}, index=idx)
serialized = self.serialize({'vals' : df.vals,
'idx' : df.index})
deserialized = self.deserialize(serialized)
baseline = {u'vals': [978307200000,
978393600000,
978480000000,
978566400000,
978652800000],
u'idx': [978307200000,
978393600000,
978480000000,
978566400000,
978652800000]
}
assert deserialized == baseline
if __name__ == "__main__":
unittest.main()
| bsd-3-clause |
luo66/scikit-learn | sklearn/utils/multiclass.py | 83 | 12343 |
# Author: Arnaud Joly, Joel Nothman, Hamzeh Alsalhi
#
# License: BSD 3 clause
"""
Multi-class / multi-label utility function
==========================================
"""
from __future__ import division
from collections import Sequence
from itertools import chain
from scipy.sparse import issparse
from scipy.sparse.base import spmatrix
from scipy.sparse import dok_matrix
from scipy.sparse import lil_matrix
import numpy as np
from ..externals.six import string_types
from .validation import check_array
from ..utils.fixes import bincount
def _unique_multiclass(y):
if hasattr(y, '__array__'):
return np.unique(np.asarray(y))
else:
return set(y)
def _unique_indicator(y):
return np.arange(check_array(y, ['csr', 'csc', 'coo']).shape[1])
_FN_UNIQUE_LABELS = {
'binary': _unique_multiclass,
'multiclass': _unique_multiclass,
'multilabel-indicator': _unique_indicator,
}
def unique_labels(*ys):
"""Extract an ordered array of unique labels
We don't allow:
- mix of multilabel and multiclass (single label) targets
- mix of label indicator matrix and anything else,
          because there are no explicit labels
- mix of label indicator matrices of different sizes
- mix of string and integer labels
At the moment, we also don't allow "multiclass-multioutput" input type.
Parameters
----------
*ys : array-likes,
Returns
-------
out : numpy array of shape [n_unique_labels]
An ordered array of unique labels.
Examples
--------
>>> from sklearn.utils.multiclass import unique_labels
>>> unique_labels([3, 5, 5, 5, 7, 7])
array([3, 5, 7])
>>> unique_labels([1, 2, 3, 4], [2, 2, 3, 4])
array([1, 2, 3, 4])
>>> unique_labels([1, 2, 10], [5, 11])
array([ 1, 2, 5, 10, 11])
"""
if not ys:
raise ValueError('No argument has been passed.')
# Check that we don't mix label format
ys_types = set(type_of_target(x) for x in ys)
if ys_types == set(["binary", "multiclass"]):
ys_types = set(["multiclass"])
if len(ys_types) > 1:
raise ValueError("Mix type of y not allowed, got types %s" % ys_types)
label_type = ys_types.pop()
# Check consistency for the indicator format
if (label_type == "multilabel-indicator" and
len(set(check_array(y, ['csr', 'csc', 'coo']).shape[1]
for y in ys)) > 1):
raise ValueError("Multi-label binary indicator input with "
"different numbers of labels")
# Get the unique set of labels
_unique_labels = _FN_UNIQUE_LABELS.get(label_type, None)
if not _unique_labels:
raise ValueError("Unknown label type: %s" % repr(ys))
ys_labels = set(chain.from_iterable(_unique_labels(y) for y in ys))
# Check that we don't mix string type with number type
if (len(set(isinstance(label, string_types) for label in ys_labels)) > 1):
raise ValueError("Mix of label input types (string and number)")
return np.array(sorted(ys_labels))
def _is_integral_float(y):
return y.dtype.kind == 'f' and np.all(y.astype(int) == y)
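# Note (editor's addition): _is_integral_float(np.array([1.0, 2.0])) is True,
# while _is_integral_float(np.array([1.5])) and _is_integral_float(np.array([1, 2]))
# are both False, since the latter is integer-typed rather than float-typed.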
def is_multilabel(y):
""" Check if ``y`` is in a multilabel format.
Parameters
----------
y : numpy array of shape [n_samples]
Target values.
Returns
-------
out : bool,
        Return ``True``, if ``y`` is in a multilabel format, else ``False``.
Examples
--------
>>> import numpy as np
>>> from sklearn.utils.multiclass import is_multilabel
>>> is_multilabel([0, 1, 0, 1])
False
>>> is_multilabel([[1], [0, 2], []])
False
>>> is_multilabel(np.array([[1, 0], [0, 0]]))
True
>>> is_multilabel(np.array([[1], [0], [0]]))
False
>>> is_multilabel(np.array([[1, 0, 0]]))
True
"""
if hasattr(y, '__array__'):
y = np.asarray(y)
if not (hasattr(y, "shape") and y.ndim == 2 and y.shape[1] > 1):
return False
if issparse(y):
if isinstance(y, (dok_matrix, lil_matrix)):
y = y.tocsr()
return (len(y.data) == 0 or np.ptp(y.data) == 0 and
(y.dtype.kind in 'biu' or # bool, int, uint
_is_integral_float(np.unique(y.data))))
else:
labels = np.unique(y)
return len(labels) < 3 and (y.dtype.kind in 'biu' or # bool, int, uint
_is_integral_float(labels))
def type_of_target(y):
"""Determine the type of data indicated by target `y`
Parameters
----------
y : array-like
Returns
-------
target_type : string
One of:
* 'continuous': `y` is an array-like of floats that are not all
integers, and is 1d or a column vector.
* 'continuous-multioutput': `y` is a 2d array of floats that are
not all integers, and both dimensions are of size > 1.
* 'binary': `y` contains <= 2 discrete values and is 1d or a column
vector.
* 'multiclass': `y` contains more than two discrete values, is not a
sequence of sequences, and is 1d or a column vector.
* 'multiclass-multioutput': `y` is a 2d array that contains more
than two discrete values, is not a sequence of sequences, and both
dimensions are of size > 1.
* 'multilabel-indicator': `y` is a label indicator matrix, an array
of two dimensions with at least two columns, and at most 2 unique
values.
* 'unknown': `y` is array-like but none of the above, such as a 3d
array, sequence of sequences, or an array of non-sequence objects.
Examples
--------
>>> import numpy as np
>>> type_of_target([0.1, 0.6])
'continuous'
>>> type_of_target([1, -1, -1, 1])
'binary'
>>> type_of_target(['a', 'b', 'a'])
'binary'
>>> type_of_target([1.0, 2.0])
'binary'
>>> type_of_target([1, 0, 2])
'multiclass'
>>> type_of_target([1.0, 0.0, 3.0])
'multiclass'
>>> type_of_target(['a', 'b', 'c'])
'multiclass'
>>> type_of_target(np.array([[1, 2], [3, 1]]))
'multiclass-multioutput'
>>> type_of_target([[1, 2]])
'multiclass-multioutput'
>>> type_of_target(np.array([[1.5, 2.0], [3.0, 1.6]]))
'continuous-multioutput'
>>> type_of_target(np.array([[0, 1], [1, 1]]))
'multilabel-indicator'
"""
valid = ((isinstance(y, (Sequence, spmatrix)) or hasattr(y, '__array__'))
and not isinstance(y, string_types))
if not valid:
raise ValueError('Expected array-like (array or non-string sequence), '
'got %r' % y)
if is_multilabel(y):
return 'multilabel-indicator'
try:
y = np.asarray(y)
except ValueError:
# Known to fail in numpy 1.3 for array of arrays
return 'unknown'
# The old sequence of sequences format
try:
if (not hasattr(y[0], '__array__') and isinstance(y[0], Sequence)
and not isinstance(y[0], string_types)):
raise ValueError('You appear to be using a legacy multi-label data'
' representation. Sequence of sequences are no'
' longer supported; use a binary array or sparse'
' matrix instead.')
except IndexError:
pass
# Invalid inputs
if y.ndim > 2 or (y.dtype == object and len(y) and
not isinstance(y.flat[0], string_types)):
return 'unknown' # [[[1, 2]]] or [obj_1] and not ["label_1"]
if y.ndim == 2 and y.shape[1] == 0:
return 'unknown' # [[]]
if y.ndim == 2 and y.shape[1] > 1:
suffix = "-multioutput" # [[1, 2], [1, 2]]
else:
suffix = "" # [1, 2, 3] or [[1], [2], [3]]
# check float and contains non-integer float values
if y.dtype.kind == 'f' and np.any(y != y.astype(int)):
# [.1, .2, 3] or [[.1, .2, 3]] or [[1., .2]] and not [1., 2., 3.]
return 'continuous' + suffix
if (len(np.unique(y)) > 2) or (y.ndim >= 2 and len(y[0]) > 1):
return 'multiclass' + suffix # [1, 2, 3] or [[1., 2., 3]] or [[1, 2]]
else:
return 'binary' # [1, 2] or [["a"], ["b"]]
def _check_partial_fit_first_call(clf, classes=None):
"""Private helper function for factorizing common classes param logic
Estimators that implement the ``partial_fit`` API need to be provided with
the list of possible classes at the first call to partial_fit.
Subsequent calls to partial_fit should check that ``classes`` is still
consistent with a previous value of ``clf.classes_`` when provided.
This function returns True if it detects that this was the first call to
``partial_fit`` on ``clf``. In that case the ``classes_`` attribute is also
set on ``clf``.
"""
if getattr(clf, 'classes_', None) is None and classes is None:
raise ValueError("classes must be passed on the first call "
"to partial_fit.")
elif classes is not None:
if getattr(clf, 'classes_', None) is not None:
if not np.all(clf.classes_ == unique_labels(classes)):
raise ValueError(
"`classes=%r` is not the same as on last call "
"to partial_fit, was: %r" % (classes, clf.classes_))
else:
# This is the first call to partial_fit
clf.classes_ = unique_labels(classes)
return True
# classes is None and clf.classes_ has already previously been set:
# nothing to do
return False
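# Illustrative sketch (editor's addition, not part of the upstream module):
# how an estimator's ``partial_fit`` is expected to use the helper above.
# The class name ``_Clf`` is hypothetical.
def _demo_check_partial_fit_first_call():
    class _Clf(object):
        pass
    clf = _Clf()
    # First call: ``classes`` must be given; the helper sets ``clf.classes_``
    # and returns True so the estimator can allocate per-class state.
    assert _check_partial_fit_first_call(clf, classes=[0, 1, 2])
    assert list(clf.classes_) == [0, 1, 2]
    # Later calls may omit ``classes``: nothing to do, the helper returns False.
    assert not _check_partial_fit_first_call(clf)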
def class_distribution(y, sample_weight=None):
"""Compute class priors from multioutput-multiclass target data
Parameters
----------
y : array like or sparse matrix of size (n_samples, n_outputs)
The labels for each example.
sample_weight : array-like of shape = (n_samples,), optional
Sample weights.
Returns
-------
classes : list of size n_outputs of arrays of size (n_classes,)
List of classes for each column.
    n_classes : list of integers of size n_outputs
        Number of classes in each column.
class_prior : list of size n_outputs of arrays of size (n_classes,)
Class distribution of each column.
"""
classes = []
n_classes = []
class_prior = []
n_samples, n_outputs = y.shape
if issparse(y):
y = y.tocsc()
y_nnz = np.diff(y.indptr)
for k in range(n_outputs):
col_nonzero = y.indices[y.indptr[k]:y.indptr[k + 1]]
# separate sample weights for zero and non-zero elements
if sample_weight is not None:
nz_samp_weight = np.asarray(sample_weight)[col_nonzero]
zeros_samp_weight_sum = (np.sum(sample_weight) -
np.sum(nz_samp_weight))
else:
nz_samp_weight = None
zeros_samp_weight_sum = y.shape[0] - y_nnz[k]
classes_k, y_k = np.unique(y.data[y.indptr[k]:y.indptr[k + 1]],
return_inverse=True)
class_prior_k = bincount(y_k, weights=nz_samp_weight)
            # An explicit zero was found, combine its weight with the weight
# of the implicit zeros
if 0 in classes_k:
class_prior_k[classes_k == 0] += zeros_samp_weight_sum
            # If there is an implicit zero and it is not in classes and
# class_prior, make an entry for it
if 0 not in classes_k and y_nnz[k] < y.shape[0]:
classes_k = np.insert(classes_k, 0, 0)
class_prior_k = np.insert(class_prior_k, 0,
zeros_samp_weight_sum)
classes.append(classes_k)
n_classes.append(classes_k.shape[0])
class_prior.append(class_prior_k / class_prior_k.sum())
else:
for k in range(n_outputs):
classes_k, y_k = np.unique(y[:, k], return_inverse=True)
classes.append(classes_k)
n_classes.append(classes_k.shape[0])
class_prior_k = bincount(y_k, weights=sample_weight)
class_prior.append(class_prior_k / class_prior_k.sum())
return (classes, n_classes, class_prior)
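# Illustrative sketch (editor's addition, not part of the upstream module):
# class_distribution applied to a small dense two-output target.
def _demo_class_distribution():
    y = np.array([[1, 0],
                  [2, 0],
                  [1, 3]])
    classes, n_classes, class_prior = class_distribution(y)
    # classes     -> [array([1, 2]), array([0, 3])]
    # n_classes   -> [2, 2]
    # class_prior -> [array([2/3, 1/3]), array([2/3, 1/3])]
    return classes, n_classes, class_prior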
| bsd-3-clause |
pravsripad/mne-python | examples/decoding/plot_decoding_unsupervised_spatial_filter.py | 29 | 2496 | """
==================================================================
Analysis of evoked response using ICA and PCA reduction techniques
==================================================================
This example computes PCA and ICA of evoked or epochs data. Then the
PCA / ICA components, a.k.a. spatial filters, are used to transform
the channel data to new sources / virtual channels. The output is
visualized on the average of all the epochs.
"""
# Authors: Jean-Remi King <[email protected]>
# Asish Panda <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import PCA, FastICA
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
X = epochs.get_data()
##############################################################################
# Transform data with PCA computed on the average, i.e. the evoked response
pca = UnsupervisedSpatialFilter(PCA(30), average=False)
pca_data = pca.fit_transform(X)
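# Shape note (editor's addition, not part of the original example): X has shape
# (n_epochs, n_channels, n_times); the spatial filter acts on the channel axis,
# so with PCA(30) above pca_data has shape (n_epochs, 30, n_times) and the
# np.mean(..., axis=0) below averages over epochs to build the evoked array.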
ev = mne.EvokedArray(np.mean(pca_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev.plot(show=False, window_title="PCA", time_unit='s')
##############################################################################
# Transform data with ICA computed on the raw epochs (no averaging)
ica = UnsupervisedSpatialFilter(FastICA(30), average=False)
ica_data = ica.fit_transform(X)
ev1 = mne.EvokedArray(np.mean(ica_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev1.plot(show=False, window_title='ICA', time_unit='s')
plt.show()
| bsd-3-clause |
walterreade/scikit-learn | benchmarks/bench_isotonic.py | 38 | 3047 | """
Benchmarks of isotonic regression performance.
We generate a synthetic dataset of size 10^n, for n in [min, max], and
examine the time taken to run isotonic regression over the dataset.
The timings are then output to stdout, or visualized on a log-log scale
with matplotlib.
This allows the scaling of the algorithm with the problem size to be
visualized and understood.
"""
from __future__ import print_function
import numpy as np
import gc
from datetime import datetime
from sklearn.isotonic import isotonic_regression
from sklearn.utils.bench import total_seconds
import matplotlib.pyplot as plt
import argparse
def generate_perturbed_logarithm_dataset(size):
    return np.random.randint(-50, 50, size=size) \
        + 50. * np.log(1 + np.arange(size))
def generate_logistic_dataset(size):
X = np.sort(np.random.normal(size=size))
return np.random.random(size=size) < 1.0 / (1.0 + np.exp(-X))
DATASET_GENERATORS = {
'perturbed_logarithm': generate_perturbed_logarithm_dataset,
'logistic': generate_logistic_dataset
}
def bench_isotonic_regression(Y):
"""
Runs a single iteration of isotonic regression on the input data,
and reports the total time taken (in seconds).
"""
gc.collect()
tstart = datetime.now()
isotonic_regression(Y)
delta = datetime.now() - tstart
return total_seconds(delta)
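# Illustrative sketch (editor's addition, not part of the original benchmark):
# timing a single synthetic problem with the helpers above; the size ``n`` is
# arbitrary and chosen only for demonstration.
def _demo_single_timing(n=10 ** 4):
    Y = DATASET_GENERATORS['logistic'](n)
    return bench_isotonic_regression(Y)  # wall-clock seconds for one fit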
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Isotonic Regression benchmark tool")
parser.add_argument('--iterations', type=int, required=True,
help="Number of iterations to average timings over "
"for each problem size")
parser.add_argument('--log_min_problem_size', type=int, required=True,
help="Base 10 logarithm of the minimum problem size")
parser.add_argument('--log_max_problem_size', type=int, required=True,
help="Base 10 logarithm of the maximum problem size")
parser.add_argument('--show_plot', action='store_true',
help="Plot timing output with matplotlib")
parser.add_argument('--dataset', choices=DATASET_GENERATORS.keys(),
required=True)
args = parser.parse_args()
timings = []
for exponent in range(args.log_min_problem_size,
args.log_max_problem_size):
n = 10 ** exponent
Y = DATASET_GENERATORS[args.dataset](n)
time_per_iteration = \
[bench_isotonic_regression(Y) for i in range(args.iterations)]
timing = (n, np.mean(time_per_iteration))
timings.append(timing)
# If we're not plotting, dump the timing to stdout
if not args.show_plot:
print(n, np.mean(time_per_iteration))
if args.show_plot:
plt.plot(*zip(*timings))
plt.title("Average time taken running isotonic regression")
plt.xlabel('Number of observations')
plt.ylabel('Time (s)')
plt.axis('tight')
plt.loglog()
plt.show()
| bsd-3-clause |
krbeverx/Firmware | Tools/process_sensor_caldata.py | 2 | 48768 | #! /usr/bin/env python
from __future__ import print_function
import argparse
import os
import math
import matplotlib.pyplot as plt
import numpy as np
from pyulog import *
"""
Reads in IMU data from a static thermal calibration test and performs a curve fit of gyro, accel and baro bias vs temperature
Data can be gathered using the following sequence:
1) Power up the board and set the TC_A_ENABLE, TC_B_ENABLE and TC_G_ENABLE parameters to 1
2) Set all CAL_GYR and CAL_ACC parameters to defaults
3) Set the parameter SDLOG_MODE to 2, and SDLOG_PROFILE "Thermal calibration" bit (2) to enable logging of sensor data for calibration and power off
4) Cold soak the board for 30 minutes
5) Move to a warm dry, still air, constant pressure environment.
6) Apply power for 45 minutes, keeping the board still.
7) Remove power and extract the .ulog file
8) Open a terminal window in the Firmware/Tools directory and run the calibration script: 'python process_sensor_caldata.py <full path name to .ulog file>'
9) Power the board, connect QGC and load the parameters from the generated .params file onto the board using QGC. Due to the number of parameters, loading them may take some time.
10) TODO - we need a way for user to reliably tell when parameters have all been changed and saved.
11) After parameters have finished loading, set SDLOG_MODE and SDLOG_PROFILE to their respective values prior to step 4) and remove power.
12) Power the board and perform a normal gyro and accelerometer sensor calibration using QGC. The board must be repowered after this step before flying due to large parameter changes and the thermal compensation parameters only being read on startup.
Outputs thermal compensation parameters in a file named <inputfilename>.params which can be loaded onto the board using QGroundControl
Outputs summary plots in a pdf file named <inputfilename>.pdf
"""
def resampleWithDeltaX(x,y):
xMin = np.amin(x)
xMax = np.amax(x)
nbInterval = 2000
interval = (xMax-xMin)/nbInterval
resampledY = np.zeros(nbInterval)
resampledX = np.zeros(nbInterval)
resampledCount = np.zeros(nbInterval)
for idx in range(0,len(x)):
if x[idx]<xMin:
binIdx = 0
elif x[idx]<xMax:
binIdx = int((x[idx]-xMin)/(interval))
else:
binIdx = nbInterval-1
resampledY[binIdx] += y[idx]
resampledX[binIdx] += x[idx]
resampledCount[binIdx] += 1
idxNotEmpty = np.where(resampledCount != 0)
resampledCount = resampledCount[idxNotEmpty]
resampledY = resampledY[idxNotEmpty]
resampledX = resampledX[idxNotEmpty]
resampledY /= resampledCount
resampledX /= resampledCount
return resampledX,resampledY
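# Illustrative sketch (editor's addition, not part of the original tool):
# resampleWithDeltaX bins x into 2000 equal-width intervals and returns the
# per-bin means, so the polynomial fits below see evenly weighted temperature
# coverage instead of being dominated by temperatures the soak dwelt on.
# The data here is synthetic and for demonstration only.
def _demo_resample_and_fit():
    temp = np.linspace(-10.0, 40.0, 50000)
    bias = 1e-3 * temp + 0.05 * np.sin(0.3 * temp)
    t_rs, b_rs = resampleWithDeltaX(temp, bias)  # ~2000 averaged samples
    return np.polyfit(t_rs, b_rs, 3)  # the same cubic fit the tool uses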
parser = argparse.ArgumentParser(description='Reads in IMU data from a static thermal calibration test and performs a curve fit of gyro, accel and baro bias vs temperature')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
parser.add_argument('--no_resample', dest='noResample', action='store_const',
const=True, default=False, help='skip resampling and use raw data')
def is_valid_directory(parser, arg):
if os.path.isdir(arg):
# Directory exists so return the directory
return arg
else:
parser.error('The directory {} does not exist'.format(arg))
args = parser.parse_args()
ulog_file_name = args.filename
noResample = args.noResample
ulog = ULog(ulog_file_name, None)
data = ulog.data_list
# extract gyro data
num_gyros = 0
for d in data:
if d.name == 'sensor_gyro':
if d.multi_id == 0:
sensor_gyro_0 = d.data
print('found gyro 0 data')
num_gyros += 1
elif d.multi_id == 1:
sensor_gyro_1 = d.data
print('found gyro 1 data')
num_gyros += 1
elif d.multi_id == 2:
sensor_gyro_2 = d.data
print('found gyro 2 data')
num_gyros += 1
elif d.multi_id == 3:
sensor_gyro_3 = d.data
print('found gyro 3 data')
num_gyros += 1
# extract accel data
num_accels = 0
for d in data:
if d.name == 'sensor_accel':
if d.multi_id == 0:
sensor_accel_0 = d.data
print('found accel 0 data')
num_accels += 1
elif d.multi_id == 1:
sensor_accel_1 = d.data
print('found accel 1 data')
num_accels += 1
elif d.multi_id == 2:
sensor_accel_2 = d.data
print('found accel 2 data')
num_accels += 1
elif d.multi_id == 3:
sensor_accel_3 = d.data
print('found accel 3 data')
num_accels += 1
# extract baro data
num_baros = 0
for d in data:
if d.name == 'sensor_baro':
if d.multi_id == 0:
sensor_baro_0 = d.data
print('found baro 0 data')
num_baros += 1
elif d.multi_id == 1:
sensor_baro_1 = d.data
print('found baro 1 data')
num_baros += 1
elif d.multi_id == 2:
sensor_baro_2 = d.data
print('found baro 2 data')
num_baros += 1
elif d.multi_id == 3:
sensor_baro_3 = d.data
print('found baro 3 data')
num_baros += 1
# open file to save plots to PDF
from matplotlib.backends.backend_pdf import PdfPages
output_plot_filename = ulog_file_name + ".pdf"
pp = PdfPages(output_plot_filename)
#################################################################################
# define data dictionary of gyro 0 thermal correction parameters
gyro_0_params = {
'TC_G0_ID':0,
'TC_G0_TMIN':0.0,
'TC_G0_TMAX':0.0,
'TC_G0_TREF':0.0,
'TC_G0_X0_0':0.0,
'TC_G0_X1_0':0.0,
'TC_G0_X2_0':0.0,
'TC_G0_X3_0':0.0,
'TC_G0_X0_1':0.0,
'TC_G0_X1_1':0.0,
'TC_G0_X2_1':0.0,
'TC_G0_X3_1':0.0,
'TC_G0_X0_2':0.0,
'TC_G0_X1_2':0.0,
'TC_G0_X2_2':0.0,
'TC_G0_X3_2':0.0
}
# curve fit the data for gyro 0 corrections
if num_gyros >= 1 and not math.isnan(sensor_gyro_0['temperature'][0]):
gyro_0_params['TC_G0_ID'] = int(np.median(sensor_gyro_0['device_id']))
# find the min, max and reference temperature
gyro_0_params['TC_G0_TMIN'] = np.amin(sensor_gyro_0['temperature'])
gyro_0_params['TC_G0_TMAX'] = np.amax(sensor_gyro_0['temperature'])
gyro_0_params['TC_G0_TREF'] = 0.5 * (gyro_0_params['TC_G0_TMIN'] + gyro_0_params['TC_G0_TMAX'])
temp_rel = sensor_gyro_0['temperature'] - gyro_0_params['TC_G0_TREF']
temp_rel_resample = np.linspace(gyro_0_params['TC_G0_TMIN']-gyro_0_params['TC_G0_TREF'], gyro_0_params['TC_G0_TMAX']-gyro_0_params['TC_G0_TREF'], 100)
temp_resample = temp_rel_resample + gyro_0_params['TC_G0_TREF']
# fit X axis
if noResample:
coef_gyro_0_x = np.polyfit(temp_rel,sensor_gyro_0['x'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_0['x'])
coef_gyro_0_x = np.polyfit(temp, sens ,3)
gyro_0_params['TC_G0_X3_0'] = coef_gyro_0_x[0]
gyro_0_params['TC_G0_X2_0'] = coef_gyro_0_x[1]
gyro_0_params['TC_G0_X1_0'] = coef_gyro_0_x[2]
gyro_0_params['TC_G0_X0_0'] = coef_gyro_0_x[3]
fit_coef_gyro_0_x = np.poly1d(coef_gyro_0_x)
gyro_0_x_resample = fit_coef_gyro_0_x(temp_rel_resample)
# fit Y axis
if noResample:
coef_gyro_0_y = np.polyfit(temp_rel,sensor_gyro_0['y'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_0['y'])
coef_gyro_0_y = np.polyfit(temp, sens ,3)
gyro_0_params['TC_G0_X3_1'] = coef_gyro_0_y[0]
gyro_0_params['TC_G0_X2_1'] = coef_gyro_0_y[1]
gyro_0_params['TC_G0_X1_1'] = coef_gyro_0_y[2]
gyro_0_params['TC_G0_X0_1'] = coef_gyro_0_y[3]
fit_coef_gyro_0_y = np.poly1d(coef_gyro_0_y)
gyro_0_y_resample = fit_coef_gyro_0_y(temp_rel_resample)
# fit Z axis
if noResample:
coef_gyro_0_z = np.polyfit(temp_rel,sensor_gyro_0['z'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_0['z'])
coef_gyro_0_z = np.polyfit(temp, sens ,3)
gyro_0_params['TC_G0_X3_2'] = coef_gyro_0_z[0]
gyro_0_params['TC_G0_X2_2'] = coef_gyro_0_z[1]
gyro_0_params['TC_G0_X1_2'] = coef_gyro_0_z[2]
gyro_0_params['TC_G0_X0_2'] = coef_gyro_0_z[3]
fit_coef_gyro_0_z = np.poly1d(coef_gyro_0_z)
gyro_0_z_resample = fit_coef_gyro_0_z(temp_rel_resample)
# gyro0 vs temperature
plt.figure(1,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_gyro_0['temperature'],sensor_gyro_0['x'],'b')
plt.plot(temp_resample,gyro_0_x_resample,'r')
plt.title('Gyro 0 ({}) Bias vs Temperature'.format(gyro_0_params['TC_G0_ID']))
plt.ylabel('X bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_gyro_0['temperature'],sensor_gyro_0['y'],'b')
plt.plot(temp_resample,gyro_0_y_resample,'r')
plt.ylabel('Y bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_gyro_0['temperature'],sensor_gyro_0['z'],'b')
plt.plot(temp_resample,gyro_0_z_resample,'r')
plt.ylabel('Z bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of gyro 1 thermal correction parameters
gyro_1_params = {
'TC_G1_ID':0,
'TC_G1_TMIN':0.0,
'TC_G1_TMAX':0.0,
'TC_G1_TREF':0.0,
'TC_G1_X0_0':0.0,
'TC_G1_X1_0':0.0,
'TC_G1_X2_0':0.0,
'TC_G1_X3_0':0.0,
'TC_G1_X0_1':0.0,
'TC_G1_X1_1':0.0,
'TC_G1_X2_1':0.0,
'TC_G1_X3_1':0.0,
'TC_G1_X0_2':0.0,
'TC_G1_X1_2':0.0,
'TC_G1_X2_2':0.0,
'TC_G1_X3_2':0.0
}
# curve fit the data for gyro 1 corrections
if num_gyros >= 2 and not math.isnan(sensor_gyro_1['temperature'][0]):
gyro_1_params['TC_G1_ID'] = int(np.median(sensor_gyro_1['device_id']))
# find the min, max and reference temperature
gyro_1_params['TC_G1_TMIN'] = np.amin(sensor_gyro_1['temperature'])
gyro_1_params['TC_G1_TMAX'] = np.amax(sensor_gyro_1['temperature'])
gyro_1_params['TC_G1_TREF'] = 0.5 * (gyro_1_params['TC_G1_TMIN'] + gyro_1_params['TC_G1_TMAX'])
temp_rel = sensor_gyro_1['temperature'] - gyro_1_params['TC_G1_TREF']
temp_rel_resample = np.linspace(gyro_1_params['TC_G1_TMIN']-gyro_1_params['TC_G1_TREF'], gyro_1_params['TC_G1_TMAX']-gyro_1_params['TC_G1_TREF'], 100)
temp_resample = temp_rel_resample + gyro_1_params['TC_G1_TREF']
# fit X axis
if noResample:
coef_gyro_1_x = np.polyfit(temp_rel,sensor_gyro_1['x'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_1['x'])
coef_gyro_1_x = np.polyfit(temp, sens ,3)
gyro_1_params['TC_G1_X3_0'] = coef_gyro_1_x[0]
gyro_1_params['TC_G1_X2_0'] = coef_gyro_1_x[1]
gyro_1_params['TC_G1_X1_0'] = coef_gyro_1_x[2]
gyro_1_params['TC_G1_X0_0'] = coef_gyro_1_x[3]
fit_coef_gyro_1_x = np.poly1d(coef_gyro_1_x)
gyro_1_x_resample = fit_coef_gyro_1_x(temp_rel_resample)
# fit Y axis
if noResample:
coef_gyro_1_y = np.polyfit(temp_rel,sensor_gyro_1['y'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_1['y'])
coef_gyro_1_y = np.polyfit(temp, sens ,3)
gyro_1_params['TC_G1_X3_1'] = coef_gyro_1_y[0]
gyro_1_params['TC_G1_X2_1'] = coef_gyro_1_y[1]
gyro_1_params['TC_G1_X1_1'] = coef_gyro_1_y[2]
gyro_1_params['TC_G1_X0_1'] = coef_gyro_1_y[3]
fit_coef_gyro_1_y = np.poly1d(coef_gyro_1_y)
gyro_1_y_resample = fit_coef_gyro_1_y(temp_rel_resample)
# fit Z axis
if noResample:
coef_gyro_1_z = np.polyfit(temp_rel,sensor_gyro_1['z'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_1['z'])
coef_gyro_1_z = np.polyfit(temp, sens ,3)
gyro_1_params['TC_G1_X3_2'] = coef_gyro_1_z[0]
gyro_1_params['TC_G1_X2_2'] = coef_gyro_1_z[1]
gyro_1_params['TC_G1_X1_2'] = coef_gyro_1_z[2]
gyro_1_params['TC_G1_X0_2'] = coef_gyro_1_z[3]
fit_coef_gyro_1_z = np.poly1d(coef_gyro_1_z)
gyro_1_z_resample = fit_coef_gyro_1_z(temp_rel_resample)
# gyro1 vs temperature
plt.figure(2,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_gyro_1['temperature'],sensor_gyro_1['x'],'b')
plt.plot(temp_resample,gyro_1_x_resample,'r')
plt.title('Gyro 1 ({}) Bias vs Temperature'.format(gyro_1_params['TC_G1_ID']))
plt.ylabel('X bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_gyro_1['temperature'],sensor_gyro_1['y'],'b')
plt.plot(temp_resample,gyro_1_y_resample,'r')
plt.ylabel('Y bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_gyro_1['temperature'],sensor_gyro_1['z'],'b')
plt.plot(temp_resample,gyro_1_z_resample,'r')
plt.ylabel('Z bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of gyro 2 thermal correction parameters
gyro_2_params = {
'TC_G2_ID':0,
'TC_G2_TMIN':0.0,
'TC_G2_TMAX':0.0,
'TC_G2_TREF':0.0,
'TC_G2_X0_0':0.0,
'TC_G2_X1_0':0.0,
'TC_G2_X2_0':0.0,
'TC_G2_X3_0':0.0,
'TC_G2_X0_1':0.0,
'TC_G2_X1_1':0.0,
'TC_G2_X2_1':0.0,
'TC_G2_X3_1':0.0,
'TC_G2_X0_2':0.0,
'TC_G2_X1_2':0.0,
'TC_G2_X2_2':0.0,
'TC_G2_X3_2':0.0
}
# curve fit the data for gyro 2 corrections
if num_gyros >= 3 and not math.isnan(sensor_gyro_2['temperature'][0]):
gyro_2_params['TC_G2_ID'] = int(np.median(sensor_gyro_2['device_id']))
# find the min, max and reference temperature
gyro_2_params['TC_G2_TMIN'] = np.amin(sensor_gyro_2['temperature'])
gyro_2_params['TC_G2_TMAX'] = np.amax(sensor_gyro_2['temperature'])
gyro_2_params['TC_G2_TREF'] = 0.5 * (gyro_2_params['TC_G2_TMIN'] + gyro_2_params['TC_G2_TMAX'])
temp_rel = sensor_gyro_2['temperature'] - gyro_2_params['TC_G2_TREF']
temp_rel_resample = np.linspace(gyro_2_params['TC_G2_TMIN']-gyro_2_params['TC_G2_TREF'], gyro_2_params['TC_G2_TMAX']-gyro_2_params['TC_G2_TREF'], 100)
temp_resample = temp_rel_resample + gyro_2_params['TC_G2_TREF']
# fit X axis
if noResample:
coef_gyro_2_x = np.polyfit(temp_rel,sensor_gyro_2['x'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_2['x'])
coef_gyro_2_x = np.polyfit(temp, sens ,3)
gyro_2_params['TC_G2_X3_0'] = coef_gyro_2_x[0]
gyro_2_params['TC_G2_X2_0'] = coef_gyro_2_x[1]
gyro_2_params['TC_G2_X1_0'] = coef_gyro_2_x[2]
gyro_2_params['TC_G2_X0_0'] = coef_gyro_2_x[3]
fit_coef_gyro_2_x = np.poly1d(coef_gyro_2_x)
gyro_2_x_resample = fit_coef_gyro_2_x(temp_rel_resample)
# fit Y axis
if noResample:
coef_gyro_2_y = np.polyfit(temp_rel,sensor_gyro_2['y'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_2['y'])
coef_gyro_2_y = np.polyfit(temp, sens ,3)
gyro_2_params['TC_G2_X3_1'] = coef_gyro_2_y[0]
gyro_2_params['TC_G2_X2_1'] = coef_gyro_2_y[1]
gyro_2_params['TC_G2_X1_1'] = coef_gyro_2_y[2]
gyro_2_params['TC_G2_X0_1'] = coef_gyro_2_y[3]
fit_coef_gyro_2_y = np.poly1d(coef_gyro_2_y)
gyro_2_y_resample = fit_coef_gyro_2_y(temp_rel_resample)
# fit Z axis
if noResample:
coef_gyro_2_z = np.polyfit(temp_rel,sensor_gyro_2['z'],3)
else:
temp, sens = resampleWithDeltaX(temp_rel,sensor_gyro_2['z'])
coef_gyro_2_z = np.polyfit(temp, sens ,3)
gyro_2_params['TC_G2_X3_2'] = coef_gyro_2_z[0]
gyro_2_params['TC_G2_X2_2'] = coef_gyro_2_z[1]
gyro_2_params['TC_G2_X1_2'] = coef_gyro_2_z[2]
gyro_2_params['TC_G2_X0_2'] = coef_gyro_2_z[3]
fit_coef_gyro_2_z = np.poly1d(coef_gyro_2_z)
gyro_2_z_resample = fit_coef_gyro_2_z(temp_rel_resample)
# gyro2 vs temperature
plt.figure(3,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_gyro_2['temperature'],sensor_gyro_2['x'],'b')
plt.plot(temp_resample,gyro_2_x_resample,'r')
plt.title('Gyro 2 ({}) Bias vs Temperature'.format(gyro_2_params['TC_G2_ID']))
plt.ylabel('X bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_gyro_2['temperature'],sensor_gyro_2['y'],'b')
plt.plot(temp_resample,gyro_2_y_resample,'r')
plt.ylabel('Y bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_gyro_2['temperature'],sensor_gyro_2['z'],'b')
plt.plot(temp_resample,gyro_2_z_resample,'r')
plt.ylabel('Z bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of gyro 3 thermal correction parameters
gyro_3_params = {
'TC_G3_ID':0,
'TC_G3_TMIN':0.0,
'TC_G3_TMAX':0.0,
'TC_G3_TREF':0.0,
'TC_G3_X0_0':0.0,
'TC_G3_X1_0':0.0,
'TC_G3_X2_0':0.0,
'TC_G3_X3_0':0.0,
'TC_G3_X0_1':0.0,
'TC_G3_X1_1':0.0,
'TC_G3_X2_1':0.0,
'TC_G3_X3_1':0.0,
'TC_G3_X0_2':0.0,
'TC_G3_X1_2':0.0,
'TC_G3_X2_2':0.0,
'TC_G3_X3_2':0.0
}
# curve fit the data for gyro 3 corrections
if num_gyros >= 4 and not math.isnan(sensor_gyro_3['temperature'][0]):
gyro_3_params['TC_G3_ID'] = int(np.median(sensor_gyro_3['device_id']))
# find the min, max and reference temperature
gyro_3_params['TC_G3_TMIN'] = np.amin(sensor_gyro_3['temperature'])
gyro_3_params['TC_G3_TMAX'] = np.amax(sensor_gyro_3['temperature'])
gyro_3_params['TC_G3_TREF'] = 0.5 * (gyro_3_params['TC_G3_TMIN'] + gyro_3_params['TC_G3_TMAX'])
temp_rel = sensor_gyro_3['temperature'] - gyro_3_params['TC_G3_TREF']
temp_rel_resample = np.linspace(gyro_3_params['TC_G3_TMIN']-gyro_3_params['TC_G3_TREF'], gyro_3_params['TC_G3_TMAX']-gyro_3_params['TC_G3_TREF'], 100)
temp_resample = temp_rel_resample + gyro_3_params['TC_G3_TREF']
# fit X axis
coef_gyro_3_x = np.polyfit(temp_rel,sensor_gyro_3['x'],3)
gyro_3_params['TC_G3_X3_0'] = coef_gyro_3_x[0]
gyro_3_params['TC_G3_X2_0'] = coef_gyro_3_x[1]
gyro_3_params['TC_G3_X1_0'] = coef_gyro_3_x[2]
gyro_3_params['TC_G3_X0_0'] = coef_gyro_3_x[3]
fit_coef_gyro_3_x = np.poly1d(coef_gyro_3_x)
gyro_3_x_resample = fit_coef_gyro_3_x(temp_rel_resample)
# fit Y axis
coef_gyro_3_y = np.polyfit(temp_rel,sensor_gyro_3['y'],3)
gyro_3_params['TC_G3_X3_1'] = coef_gyro_3_y[0]
gyro_3_params['TC_G3_X2_1'] = coef_gyro_3_y[1]
gyro_3_params['TC_G3_X1_1'] = coef_gyro_3_y[2]
gyro_3_params['TC_G3_X0_1'] = coef_gyro_3_y[3]
fit_coef_gyro_3_y = np.poly1d(coef_gyro_3_y)
gyro_3_y_resample = fit_coef_gyro_3_y(temp_rel_resample)
# fit Z axis
coef_gyro_3_z = np.polyfit(temp_rel,sensor_gyro_3['z'],3)
gyro_3_params['TC_G3_X3_2'] = coef_gyro_3_z[0]
gyro_3_params['TC_G3_X2_2'] = coef_gyro_3_z[1]
gyro_3_params['TC_G3_X1_2'] = coef_gyro_3_z[2]
gyro_3_params['TC_G3_X0_2'] = coef_gyro_3_z[3]
fit_coef_gyro_3_z = np.poly1d(coef_gyro_3_z)
gyro_3_z_resample = fit_coef_gyro_3_z(temp_rel_resample)
# gyro3 vs temperature
plt.figure(4,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_gyro_3['temperature'],sensor_gyro_3['x'],'b')
plt.plot(temp_resample,gyro_3_x_resample,'r')
    plt.title('Gyro 3 ({}) Bias vs Temperature'.format(gyro_3_params['TC_G3_ID']))
plt.ylabel('X bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_gyro_3['temperature'],sensor_gyro_3['y'],'b')
plt.plot(temp_resample,gyro_3_y_resample,'r')
plt.ylabel('Y bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_gyro_3['temperature'],sensor_gyro_3['z'],'b')
plt.plot(temp_resample,gyro_3_z_resample,'r')
plt.ylabel('Z bias (rad/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of accel 0 thermal correction parameters
accel_0_params = {
'TC_A0_ID':0,
'TC_A0_TMIN':0.0,
'TC_A0_TMAX':0.0,
'TC_A0_TREF':0.0,
'TC_A0_X0_0':0.0,
'TC_A0_X1_0':0.0,
'TC_A0_X2_0':0.0,
'TC_A0_X3_0':0.0,
'TC_A0_X0_1':0.0,
'TC_A0_X1_1':0.0,
'TC_A0_X2_1':0.0,
'TC_A0_X3_1':0.0,
'TC_A0_X0_2':0.0,
'TC_A0_X1_2':0.0,
'TC_A0_X2_2':0.0,
'TC_A0_X3_2':0.0
}
# curve fit the data for accel 0 corrections
if num_accels >= 1 and not math.isnan(sensor_accel_0['temperature'][0]):
accel_0_params['TC_A0_ID'] = int(np.median(sensor_accel_0['device_id']))
# find the min, max and reference temperature
accel_0_params['TC_A0_TMIN'] = np.amin(sensor_accel_0['temperature'])
accel_0_params['TC_A0_TMAX'] = np.amax(sensor_accel_0['temperature'])
accel_0_params['TC_A0_TREF'] = 0.5 * (accel_0_params['TC_A0_TMIN'] + accel_0_params['TC_A0_TMAX'])
temp_rel = sensor_accel_0['temperature'] - accel_0_params['TC_A0_TREF']
temp_rel_resample = np.linspace(accel_0_params['TC_A0_TMIN']-accel_0_params['TC_A0_TREF'], accel_0_params['TC_A0_TMAX']-accel_0_params['TC_A0_TREF'], 100)
temp_resample = temp_rel_resample + accel_0_params['TC_A0_TREF']
# fit X axis
correction_x = sensor_accel_0['x'] - np.median(sensor_accel_0['x'])
if noResample:
coef_accel_0_x = np.polyfit(temp_rel,correction_x,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_x)
coef_accel_0_x = np.polyfit(temp, sens ,3)
accel_0_params['TC_A0_X3_0'] = coef_accel_0_x[0]
accel_0_params['TC_A0_X2_0'] = coef_accel_0_x[1]
accel_0_params['TC_A0_X1_0'] = coef_accel_0_x[2]
accel_0_params['TC_A0_X0_0'] = coef_accel_0_x[3]
fit_coef_accel_0_x = np.poly1d(coef_accel_0_x)
correction_x_resample = fit_coef_accel_0_x(temp_rel_resample)
# fit Y axis
correction_y = sensor_accel_0['y']-np.median(sensor_accel_0['y'])
if noResample:
coef_accel_0_y = np.polyfit(temp_rel,correction_y,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_y)
coef_accel_0_y = np.polyfit(temp, sens ,3)
accel_0_params['TC_A0_X3_1'] = coef_accel_0_y[0]
accel_0_params['TC_A0_X2_1'] = coef_accel_0_y[1]
accel_0_params['TC_A0_X1_1'] = coef_accel_0_y[2]
accel_0_params['TC_A0_X0_1'] = coef_accel_0_y[3]
fit_coef_accel_0_y = np.poly1d(coef_accel_0_y)
correction_y_resample = fit_coef_accel_0_y(temp_rel_resample)
# fit Z axis
correction_z = sensor_accel_0['z']-np.median(sensor_accel_0['z'])
if noResample:
coef_accel_0_z = np.polyfit(temp_rel,correction_z,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_z)
coef_accel_0_z = np.polyfit(temp, sens ,3)
accel_0_params['TC_A0_X3_2'] = coef_accel_0_z[0]
accel_0_params['TC_A0_X2_2'] = coef_accel_0_z[1]
accel_0_params['TC_A0_X1_2'] = coef_accel_0_z[2]
accel_0_params['TC_A0_X0_2'] = coef_accel_0_z[3]
fit_coef_accel_0_z = np.poly1d(coef_accel_0_z)
correction_z_resample = fit_coef_accel_0_z(temp_rel_resample)
# accel 0 vs temperature
plt.figure(5,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_accel_0['temperature'],correction_x,'b')
plt.plot(temp_resample,correction_x_resample,'r')
plt.title('Accel 0 ({}) Bias vs Temperature'.format(accel_0_params['TC_A0_ID']))
plt.ylabel('X bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_accel_0['temperature'],correction_y,'b')
plt.plot(temp_resample,correction_y_resample,'r')
plt.ylabel('Y bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_accel_0['temperature'],correction_z,'b')
plt.plot(temp_resample,correction_z_resample,'r')
plt.ylabel('Z bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of accel 1 thermal correction parameters
accel_1_params = {
'TC_A1_ID':0,
'TC_A1_TMIN':0.0,
'TC_A1_TMAX':0.0,
'TC_A1_TREF':0.0,
'TC_A1_X0_0':0.0,
'TC_A1_X1_0':0.0,
'TC_A1_X2_0':0.0,
'TC_A1_X3_0':0.0,
'TC_A1_X0_1':0.0,
'TC_A1_X1_1':0.0,
'TC_A1_X2_1':0.0,
'TC_A1_X3_1':0.0,
'TC_A1_X0_2':0.0,
'TC_A1_X1_2':0.0,
'TC_A1_X2_2':0.0,
'TC_A1_X3_2':0.0
}
# curve fit the data for accel 1 corrections
if num_accels >= 2 and not math.isnan(sensor_accel_1['temperature'][0]):
accel_1_params['TC_A1_ID'] = int(np.median(sensor_accel_1['device_id']))
# find the min, max and reference temperature
accel_1_params['TC_A1_TMIN'] = np.amin(sensor_accel_1['temperature'])
accel_1_params['TC_A1_TMAX'] = np.amax(sensor_accel_1['temperature'])
accel_1_params['TC_A1_TREF'] = 0.5 * (accel_1_params['TC_A1_TMIN'] + accel_1_params['TC_A1_TMAX'])
temp_rel = sensor_accel_1['temperature'] - accel_1_params['TC_A1_TREF']
temp_rel_resample = np.linspace(accel_1_params['TC_A1_TMIN']-accel_1_params['TC_A1_TREF'], accel_1_params['TC_A1_TMAX']-accel_1_params['TC_A1_TREF'], 100)
temp_resample = temp_rel_resample + accel_1_params['TC_A1_TREF']
# fit X axis
correction_x = sensor_accel_1['x']-np.median(sensor_accel_1['x'])
if noResample:
coef_accel_1_x = np.polyfit(temp_rel,correction_x,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_x)
coef_accel_1_x = np.polyfit(temp, sens ,3)
accel_1_params['TC_A1_X3_0'] = coef_accel_1_x[0]
accel_1_params['TC_A1_X2_0'] = coef_accel_1_x[1]
accel_1_params['TC_A1_X1_0'] = coef_accel_1_x[2]
accel_1_params['TC_A1_X0_0'] = coef_accel_1_x[3]
fit_coef_accel_1_x = np.poly1d(coef_accel_1_x)
correction_x_resample = fit_coef_accel_1_x(temp_rel_resample)
# fit Y axis
correction_y = sensor_accel_1['y']-np.median(sensor_accel_1['y'])
if noResample:
coef_accel_1_y = np.polyfit(temp_rel,correction_y,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_y)
coef_accel_1_y = np.polyfit(temp, sens ,3)
accel_1_params['TC_A1_X3_1'] = coef_accel_1_y[0]
accel_1_params['TC_A1_X2_1'] = coef_accel_1_y[1]
accel_1_params['TC_A1_X1_1'] = coef_accel_1_y[2]
accel_1_params['TC_A1_X0_1'] = coef_accel_1_y[3]
fit_coef_accel_1_y = np.poly1d(coef_accel_1_y)
correction_y_resample = fit_coef_accel_1_y(temp_rel_resample)
# fit Z axis
correction_z = (sensor_accel_1['z'])-np.median(sensor_accel_1['z'])
if noResample:
coef_accel_1_z = np.polyfit(temp_rel,correction_z,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_z)
coef_accel_1_z = np.polyfit(temp, sens ,3)
accel_1_params['TC_A1_X3_2'] = coef_accel_1_z[0]
accel_1_params['TC_A1_X2_2'] = coef_accel_1_z[1]
accel_1_params['TC_A1_X1_2'] = coef_accel_1_z[2]
accel_1_params['TC_A1_X0_2'] = coef_accel_1_z[3]
fit_coef_accel_1_z = np.poly1d(coef_accel_1_z)
correction_z_resample = fit_coef_accel_1_z(temp_rel_resample)
# accel 1 vs temperature
plt.figure(6,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_accel_1['temperature'],correction_x,'b')
plt.plot(temp_resample,correction_x_resample,'r')
plt.title('Accel 1 ({}) Bias vs Temperature'.format(accel_1_params['TC_A1_ID']))
plt.ylabel('X bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_accel_1['temperature'],correction_y,'b')
plt.plot(temp_resample,correction_y_resample,'r')
plt.ylabel('Y bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_accel_1['temperature'],correction_z,'b')
plt.plot(temp_resample,correction_z_resample,'r')
plt.ylabel('Z bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of accel 2 thermal correction parameters
accel_2_params = {
'TC_A2_ID':0,
'TC_A2_TMIN':0.0,
'TC_A2_TMAX':0.0,
'TC_A2_TREF':0.0,
'TC_A2_X0_0':0.0,
'TC_A2_X1_0':0.0,
'TC_A2_X2_0':0.0,
'TC_A2_X3_0':0.0,
'TC_A2_X0_1':0.0,
'TC_A2_X1_1':0.0,
'TC_A2_X2_1':0.0,
'TC_A2_X3_1':0.0,
'TC_A2_X0_2':0.0,
'TC_A2_X1_2':0.0,
'TC_A2_X2_2':0.0,
'TC_A2_X3_2':0.0
}
# curve fit the data for accel 2 corrections
if num_accels >= 3 and not math.isnan(sensor_accel_2['temperature'][0]):
accel_2_params['TC_A2_ID'] = int(np.median(sensor_accel_2['device_id']))
# find the min, max and reference temperature
accel_2_params['TC_A2_TMIN'] = np.amin(sensor_accel_2['temperature'])
accel_2_params['TC_A2_TMAX'] = np.amax(sensor_accel_2['temperature'])
accel_2_params['TC_A2_TREF'] = 0.5 * (accel_2_params['TC_A2_TMIN'] + accel_2_params['TC_A2_TMAX'])
temp_rel = sensor_accel_2['temperature'] - accel_2_params['TC_A2_TREF']
temp_rel_resample = np.linspace(accel_2_params['TC_A2_TMIN']-accel_2_params['TC_A2_TREF'], accel_2_params['TC_A2_TMAX']-accel_2_params['TC_A2_TREF'], 100)
temp_resample = temp_rel_resample + accel_2_params['TC_A2_TREF']
# fit X axis
correction_x = sensor_accel_2['x']-np.median(sensor_accel_2['x'])
if noResample:
coef_accel_2_x = np.polyfit(temp_rel,correction_x,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_x)
coef_accel_2_x = np.polyfit(temp, sens ,3)
accel_2_params['TC_A2_X3_0'] = coef_accel_2_x[0]
accel_2_params['TC_A2_X2_0'] = coef_accel_2_x[1]
accel_2_params['TC_A2_X1_0'] = coef_accel_2_x[2]
accel_2_params['TC_A2_X0_0'] = coef_accel_2_x[3]
fit_coef_accel_2_x = np.poly1d(coef_accel_2_x)
correction_x_resample = fit_coef_accel_2_x(temp_rel_resample)
# fit Y axis
correction_y = sensor_accel_2['y']-np.median(sensor_accel_2['y'])
if noResample:
coef_accel_2_y = np.polyfit(temp_rel,correction_y,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_y)
coef_accel_2_y = np.polyfit(temp, sens ,3)
accel_2_params['TC_A2_X3_1'] = coef_accel_2_y[0]
accel_2_params['TC_A2_X2_1'] = coef_accel_2_y[1]
accel_2_params['TC_A2_X1_1'] = coef_accel_2_y[2]
accel_2_params['TC_A2_X0_1'] = coef_accel_2_y[3]
fit_coef_accel_2_y = np.poly1d(coef_accel_2_y)
correction_y_resample = fit_coef_accel_2_y(temp_rel_resample)
# fit Z axis
correction_z = sensor_accel_2['z']-np.median(sensor_accel_2['z'])
if noResample:
coef_accel_2_z = np.polyfit(temp_rel,correction_z,3)
else:
temp, sens = resampleWithDeltaX(temp_rel,correction_z)
coef_accel_2_z = np.polyfit(temp, sens ,3)
accel_2_params['TC_A2_X3_2'] = coef_accel_2_z[0]
accel_2_params['TC_A2_X2_2'] = coef_accel_2_z[1]
accel_2_params['TC_A2_X1_2'] = coef_accel_2_z[2]
accel_2_params['TC_A2_X0_2'] = coef_accel_2_z[3]
fit_coef_accel_2_z = np.poly1d(coef_accel_2_z)
correction_z_resample = fit_coef_accel_2_z(temp_rel_resample)
# accel 2 vs temperature
plt.figure(7,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_accel_2['temperature'],correction_x,'b')
plt.plot(temp_resample,correction_x_resample,'r')
plt.title('Accel 2 ({}) Bias vs Temperature'.format(accel_2_params['TC_A2_ID']))
plt.ylabel('X bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_accel_2['temperature'],correction_y,'b')
plt.plot(temp_resample,correction_y_resample,'r')
plt.ylabel('Y bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_accel_2['temperature'],correction_z,'b')
plt.plot(temp_resample,correction_z_resample,'r')
plt.ylabel('Z bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of accel 3 thermal correction parameters
accel_3_params = {
'TC_A3_ID':0,
'TC_A3_TMIN':0.0,
'TC_A3_TMAX':0.0,
'TC_A3_TREF':0.0,
'TC_A3_X0_0':0.0,
'TC_A3_X1_0':0.0,
'TC_A3_X2_0':0.0,
'TC_A3_X3_0':0.0,
'TC_A3_X0_1':0.0,
'TC_A3_X1_1':0.0,
'TC_A3_X2_1':0.0,
'TC_A3_X3_1':0.0,
'TC_A3_X0_2':0.0,
'TC_A3_X1_2':0.0,
'TC_A3_X2_2':0.0,
'TC_A3_X3_2':0.0
}
# curve fit the data for accel 3 corrections
if num_accels >= 4 and not math.isnan(sensor_accel_3['temperature'][0]):
accel_3_params['TC_A3_ID'] = int(np.median(sensor_accel_3['device_id']))
# find the min, max and reference temperature
accel_3_params['TC_A3_TMIN'] = np.amin(sensor_accel_3['temperature'])
accel_3_params['TC_A3_TMAX'] = np.amax(sensor_accel_3['temperature'])
accel_3_params['TC_A3_TREF'] = 0.5 * (accel_3_params['TC_A3_TMIN'] + accel_3_params['TC_A3_TMAX'])
temp_rel = sensor_accel_3['temperature'] - accel_3_params['TC_A3_TREF']
temp_rel_resample = np.linspace(accel_3_params['TC_A3_TMIN']-accel_3_params['TC_A3_TREF'], accel_3_params['TC_A3_TMAX']-accel_3_params['TC_A3_TREF'], 100)
temp_resample = temp_rel_resample + accel_3_params['TC_A3_TREF']
# fit X axis
correction_x = sensor_accel_3['x']-np.median(sensor_accel_3['x'])
coef_accel_3_x = np.polyfit(temp_rel,correction_x,3)
accel_3_params['TC_A3_X3_0'] = coef_accel_3_x[0]
accel_3_params['TC_A3_X2_0'] = coef_accel_3_x[1]
accel_3_params['TC_A3_X1_0'] = coef_accel_3_x[2]
accel_3_params['TC_A3_X0_0'] = coef_accel_3_x[3]
fit_coef_accel_3_x = np.poly1d(coef_accel_3_x)
correction_x_resample = fit_coef_accel_3_x(temp_rel_resample)
# fit Y axis
correction_y = sensor_accel_3['y']-np.median(sensor_accel_3['y'])
coef_accel_3_y = np.polyfit(temp_rel,correction_y,3)
accel_3_params['TC_A3_X3_1'] = coef_accel_3_y[0]
accel_3_params['TC_A3_X2_1'] = coef_accel_3_y[1]
accel_3_params['TC_A3_X1_1'] = coef_accel_3_y[2]
accel_3_params['TC_A3_X0_1'] = coef_accel_3_y[3]
fit_coef_accel_3_y = np.poly1d(coef_accel_3_y)
correction_y_resample = fit_coef_accel_3_y(temp_rel_resample)
# fit Z axis
correction_z = sensor_accel_3['z']-np.median(sensor_accel_3['z'])
coef_accel_3_z = np.polyfit(temp_rel,correction_z,3)
accel_3_params['TC_A3_X3_2'] = coef_accel_3_z[0]
accel_3_params['TC_A3_X2_2'] = coef_accel_3_z[1]
accel_3_params['TC_A3_X1_2'] = coef_accel_3_z[2]
accel_3_params['TC_A3_X0_2'] = coef_accel_3_z[3]
fit_coef_accel_3_z = np.poly1d(coef_accel_3_z)
correction_z_resample = fit_coef_accel_3_z(temp_rel_resample)
# accel 3 vs temperature
plt.figure(8,figsize=(20,13))
# draw plots
plt.subplot(3,1,1)
plt.plot(sensor_accel_3['temperature'],correction_x,'b')
plt.plot(temp_resample,correction_x_resample,'r')
plt.title('Accel 3 ({}) Bias vs Temperature'.format(accel_3_params['TC_A3_ID']))
plt.ylabel('X bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,2)
plt.plot(sensor_accel_3['temperature'],correction_y,'b')
plt.plot(temp_resample,correction_y_resample,'r')
plt.ylabel('Y bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
# draw plots
plt.subplot(3,1,3)
plt.plot(sensor_accel_3['temperature'],correction_z,'b')
plt.plot(temp_resample,correction_z_resample,'r')
plt.ylabel('Z bias (m/s/s)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
#################################################################################
# define data dictionary of baro 0 thermal correction parameters
baro_0_params = {
'TC_B0_ID':0,
'TC_B0_TMIN':0.0,
'TC_B0_TMAX':0.0,
'TC_B0_TREF':0.0,
'TC_B0_X0':0.0,
'TC_B0_X1':0.0,
'TC_B0_X2':0.0,
'TC_B0_X3':0.0,
'TC_B0_X4':0.0,
'TC_B0_X5':0.0
}
# curve fit the data for baro 0 corrections
baro_0_params['TC_B0_ID'] = int(np.median(sensor_baro_0['device_id']))
# find the min, max and reference temperature
baro_0_params['TC_B0_TMIN'] = np.amin(sensor_baro_0['temperature'])
baro_0_params['TC_B0_TMAX'] = np.amax(sensor_baro_0['temperature'])
baro_0_params['TC_B0_TREF'] = 0.5 * (baro_0_params['TC_B0_TMIN'] + baro_0_params['TC_B0_TMAX'])
temp_rel = sensor_baro_0['temperature'] - baro_0_params['TC_B0_TREF']
temp_rel_resample = np.linspace(baro_0_params['TC_B0_TMIN']-baro_0_params['TC_B0_TREF'], baro_0_params['TC_B0_TMAX']-baro_0_params['TC_B0_TREF'], 100)
temp_resample = temp_rel_resample + baro_0_params['TC_B0_TREF']
# fit data
median_pressure = np.median(sensor_baro_0['pressure'])
if noResample:
coef_baro_0_x = np.polyfit(temp_rel,100*(sensor_baro_0['pressure']-median_pressure),5) # convert from hPa to Pa
else:
temperature, baro = resampleWithDeltaX(temp_rel,100*(sensor_baro_0['pressure']-median_pressure)) # convert from hPa to Pa
coef_baro_0_x = np.polyfit(temperature,baro,5)
baro_0_params['TC_B0_X5'] = coef_baro_0_x[0]
baro_0_params['TC_B0_X4'] = coef_baro_0_x[1]
baro_0_params['TC_B0_X3'] = coef_baro_0_x[2]
baro_0_params['TC_B0_X2'] = coef_baro_0_x[3]
baro_0_params['TC_B0_X1'] = coef_baro_0_x[4]
baro_0_params['TC_B0_X0'] = coef_baro_0_x[5]
fit_coef_baro_0_x = np.poly1d(coef_baro_0_x)
baro_0_x_resample = fit_coef_baro_0_x(temp_rel_resample)
# baro 0 vs temperature
plt.figure(9,figsize=(20,13))
# draw plots
plt.plot(sensor_baro_0['temperature'],100*sensor_baro_0['pressure']-100*median_pressure,'b')
plt.plot(temp_resample,baro_0_x_resample,'r')
plt.title('Baro 0 ({}) Bias vs Temperature'.format(baro_0_params['TC_B0_ID']))
plt.ylabel('Z bias (Pa)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
# define data dictionary of baro 1 thermal correction parameters
baro_1_params = {
'TC_B1_ID':0,
'TC_B1_TMIN':0.0,
'TC_B1_TMAX':0.0,
'TC_B1_TREF':0.0,
'TC_B1_X0':0.0,
'TC_B1_X1':0.0,
'TC_B1_X2':0.0,
'TC_B1_X3':0.0,
'TC_B1_X4':0.0,
'TC_B1_X5':0.0,
}
if num_baros >= 2:
# curve fit the data for baro 1 corrections
baro_1_params['TC_B1_ID'] = int(np.median(sensor_baro_1['device_id']))
# find the min, max and reference temperature
baro_1_params['TC_B1_TMIN'] = np.amin(sensor_baro_1['temperature'])
baro_1_params['TC_B1_TMAX'] = np.amax(sensor_baro_1['temperature'])
baro_1_params['TC_B1_TREF'] = 0.5 * (baro_1_params['TC_B1_TMIN'] + baro_1_params['TC_B1_TMAX'])
temp_rel = sensor_baro_1['temperature'] - baro_1_params['TC_B1_TREF']
temp_rel_resample = np.linspace(baro_1_params['TC_B1_TMIN']-baro_1_params['TC_B1_TREF'], baro_1_params['TC_B1_TMAX']-baro_1_params['TC_B1_TREF'], 100)
temp_resample = temp_rel_resample + baro_1_params['TC_B1_TREF']
# fit data
    median_pressure = np.median(sensor_baro_1['pressure'])
if noResample:
coef_baro_1_x = np.polyfit(temp_rel,100*(sensor_baro_1['pressure']-median_pressure),5) # convert from hPa to Pa
else:
temperature, baro = resampleWithDeltaX(temp_rel,100*(sensor_baro_1['pressure']-median_pressure)) # convert from hPa to Pa
coef_baro_1_x = np.polyfit(temperature,baro,5)
baro_1_params['TC_B1_X5'] = coef_baro_1_x[0]
baro_1_params['TC_B1_X4'] = coef_baro_1_x[1]
baro_1_params['TC_B1_X3'] = coef_baro_1_x[2]
baro_1_params['TC_B1_X2'] = coef_baro_1_x[3]
baro_1_params['TC_B1_X1'] = coef_baro_1_x[4]
baro_1_params['TC_B1_X0'] = coef_baro_1_x[5]
fit_coef_baro_1_x = np.poly1d(coef_baro_1_x)
baro_1_x_resample = fit_coef_baro_1_x(temp_rel_resample)
    # baro 1 vs temperature
    plt.figure(10,figsize=(20,13))
# draw plots
plt.plot(sensor_baro_1['temperature'],100*sensor_baro_1['pressure']-100*median_pressure,'b')
plt.plot(temp_resample,baro_1_x_resample,'r')
plt.title('Baro 1 ({}) Bias vs Temperature'.format(baro_1_params['TC_B1_ID']))
plt.ylabel('Z bias (Pa)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
# define data dictionary of baro 2 thermal correction parameters
baro_2_params = {
'TC_B2_ID':0,
'TC_B2_TMIN':0.0,
'TC_B2_TMAX':0.0,
'TC_B2_TREF':0.0,
'TC_B2_X0':0.0,
'TC_B2_X1':0.0,
'TC_B2_X2':0.0,
'TC_B2_X3':0.0,
'TC_B2_X4':0.0,
'TC_B2_X5':0.0,
'TC_B2_SCL':1.0,
}
if num_baros >= 3:
# curve fit the data for baro 2 corrections
baro_2_params['TC_B2_ID'] = int(np.median(sensor_baro_2['device_id']))
# find the min, max and reference temperature
baro_2_params['TC_B2_TMIN'] = np.amin(sensor_baro_2['temperature'])
baro_2_params['TC_B2_TMAX'] = np.amax(sensor_baro_2['temperature'])
baro_2_params['TC_B2_TREF'] = 0.5 * (baro_2_params['TC_B2_TMIN'] + baro_2_params['TC_B2_TMAX'])
temp_rel = sensor_baro_2['temperature'] - baro_2_params['TC_B2_TREF']
temp_rel_resample = np.linspace(baro_2_params['TC_B2_TMIN']-baro_2_params['TC_B2_TREF'], baro_2_params['TC_B2_TMAX']-baro_2_params['TC_B2_TREF'], 100)
temp_resample = temp_rel_resample + baro_2_params['TC_B2_TREF']
# fit data
    median_pressure = np.median(sensor_baro_2['pressure'])
if noResample:
coef_baro_2_x = np.polyfit(temp_rel,100*(sensor_baro_2['pressure']-median_pressure),5) # convert from hPa to Pa
else:
temperature, baro = resampleWithDeltaX(temp_rel,100*(sensor_baro_2['pressure']-median_pressure)) # convert from hPa to Pa
coef_baro_2_x = np.polyfit(temperature,baro,5)
baro_2_params['TC_B2_X5'] = coef_baro_2_x[0]
baro_2_params['TC_B2_X4'] = coef_baro_2_x[1]
baro_2_params['TC_B2_X3'] = coef_baro_2_x[2]
baro_2_params['TC_B2_X2'] = coef_baro_2_x[3]
baro_2_params['TC_B2_X1'] = coef_baro_2_x[4]
baro_2_params['TC_B2_X0'] = coef_baro_2_x[5]
fit_coef_baro_2_x = np.poly1d(coef_baro_2_x)
baro_2_x_resample = fit_coef_baro_2_x(temp_rel_resample)
# baro 2 vs temperature
    plt.figure(11,figsize=(20,13))
# draw plots
plt.plot(sensor_baro_2['temperature'],100*sensor_baro_2['pressure']-100*median_pressure,'b')
plt.plot(temp_resample,baro_2_x_resample,'r')
plt.title('Baro 2 ({}) Bias vs Temperature'.format(baro_2_params['TC_B2_ID']))
plt.ylabel('Z bias (Pa)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
# define data dictionary of baro 3 thermal correction parameters
baro_3_params = {
'TC_B3_ID':0,
'TC_B3_TMIN':0.0,
'TC_B3_TMAX':0.0,
'TC_B3_TREF':0.0,
'TC_B3_X0':0.0,
'TC_B3_X1':0.0,
'TC_B3_X2':0.0,
'TC_B3_X3':0.0,
'TC_B3_X4':0.0,
'TC_B3_X5':0.0,
'TC_B3_SCL':1.0,
}
if num_baros >= 4:
    # curve fit the data for baro 3 corrections
baro_3_params['TC_B3_ID'] = int(np.median(sensor_baro_3['device_id']))
# find the min, max and reference temperature
baro_3_params['TC_B3_TMIN'] = np.amin(sensor_baro_3['temperature'])
baro_3_params['TC_B3_TMAX'] = np.amax(sensor_baro_3['temperature'])
baro_3_params['TC_B3_TREF'] = 0.5 * (baro_3_params['TC_B3_TMIN'] + baro_3_params['TC_B3_TMAX'])
temp_rel = sensor_baro_3['temperature'] - baro_3_params['TC_B3_TREF']
temp_rel_resample = np.linspace(baro_3_params['TC_B3_TMIN']-baro_3_params['TC_B3_TREF'], baro_3_params['TC_B3_TMAX']-baro_3_params['TC_B3_TREF'], 100)
temp_resample = temp_rel_resample + baro_3_params['TC_B3_TREF']
# fit data
median_pressure = np.median(sensor_baro_3['pressure'])
coef_baro_3_x = np.polyfit(temp_rel,100*(sensor_baro_3['pressure']-median_pressure),5) # convert from hPa to Pa
baro_3_params['TC_B3_X5'] = coef_baro_3_x[0]
baro_3_params['TC_B3_X4'] = coef_baro_3_x[1]
baro_3_params['TC_B3_X3'] = coef_baro_3_x[2]
baro_3_params['TC_B3_X2'] = coef_baro_3_x[3]
baro_3_params['TC_B3_X1'] = coef_baro_3_x[4]
baro_3_params['TC_B3_X0'] = coef_baro_3_x[5]
fit_coef_baro_3_x = np.poly1d(coef_baro_3_x)
baro_3_x_resample = fit_coef_baro_3_x(temp_rel_resample)
# baro 3 vs temperature
    plt.figure(12,figsize=(20,13))
# draw plots
plt.plot(sensor_baro_3['temperature'],100*sensor_baro_3['pressure']-100*median_pressure,'b')
plt.plot(temp_resample,baro_3_x_resample,'r')
plt.title('Baro 3 ({}) Bias vs Temperature'.format(baro_3_params['TC_B3_ID']))
plt.ylabel('Z bias (Pa)')
plt.xlabel('temperature (degC)')
plt.grid()
pp.savefig()
#################################################################################
# close the pdf file
pp.close()
# close all figures
plt.close("all")
# write correction parameters to file
test_results_filename = ulog_file_name + ".params"
file = open(test_results_filename,"w")
file.write("# Sensor thermal compensation parameters\n")
file.write("#\n")
file.write("# Vehicle-Id Component-Id Name Value Type\n")
# accel 0 corrections
key_list_accel = list(accel_0_params.keys())
key_list_accel.sort()
for key in key_list_accel:
if key == 'TC_A0_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(accel_0_params[key])+"\t"+type+"\n")
# accel 1 corrections
key_list_accel = list(accel_1_params.keys())
key_list_accel.sort()
for key in key_list_accel:
if key == 'TC_A1_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(accel_1_params[key])+"\t"+type+"\n")
# accel 2 corrections
key_list_accel = list(accel_2_params.keys())
key_list_accel.sort()
for key in key_list_accel:
if key == 'TC_A2_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(accel_2_params[key])+"\t"+type+"\n")
# accel 3 corrections
key_list_accel = list(accel_3_params.keys())
key_list_accel.sort()
for key in key_list_accel:
if key == 'TC_A3_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(accel_3_params[key])+"\t"+type+"\n")
# baro 0 corrections
key_list_baro = list(baro_0_params.keys())
key_list_baro.sort()
for key in key_list_baro:
if key == 'TC_B0_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(baro_0_params[key])+"\t"+type+"\n")
# baro 1 corrections
key_list_baro = list(baro_1_params.keys())
key_list_baro.sort()
for key in key_list_baro:
if key == 'TC_B1_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(baro_1_params[key])+"\t"+type+"\n")
# baro 2 corrections
key_list_baro = list(baro_2_params.keys())
key_list_baro.sort()
for key in key_list_baro:
if key == 'TC_B2_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(baro_2_params[key])+"\t"+type+"\n")
# baro 3 corrections
key_list_baro = list(baro_3_params.keys())
key_list_baro.sort()
for key in key_list_baro:
if key == 'TC_B3_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(baro_3_params[key])+"\t"+type+"\n")
# gyro 0 corrections
key_list_gyro = list(gyro_0_params.keys())
key_list_gyro.sort()
for key in key_list_gyro:
if key == 'TC_G0_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(gyro_0_params[key])+"\t"+type+"\n")
# gyro 1 corrections
key_list_gyro = list(gyro_1_params.keys())
key_list_gyro.sort()
for key in key_list_gyro:
if key == 'TC_G1_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(gyro_1_params[key])+"\t"+type+"\n")
# gyro 2 corrections
key_list_gyro = list(gyro_2_params.keys())
key_list_gyro.sort()
for key in key_list_gyro:
if key == 'TC_G2_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(gyro_2_params[key])+"\t"+type+"\n")
# gyro 3 corrections
key_list_gyro = list(gyro_3_params.keys())
key_list_gyro.sort()
for key in key_list_gyro:
if key == 'TC_G3_ID':
type = "6"
else:
type = "9"
file.write("1"+"\t"+"1"+"\t"+key+"\t"+str(gyro_3_params[key])+"\t"+type+"\n")
file.close()
print('Correction parameters written to ' + test_results_filename)
print('Plots saved to ' + output_plot_filename)
| bsd-3-clause |
NelisVerhoef/scikit-learn | sklearn/utils/metaestimators.py | 283 | 2353 | """Utilities for meta-estimators"""
# Author: Joel Nothman
# Andreas Mueller
# Licence: BSD
from operator import attrgetter
from functools import update_wrapper
__all__ = ['if_delegate_has_method']
class _IffHasAttrDescriptor(object):
"""Implements a conditional property using the descriptor protocol.
Using this class to create a decorator will raise an ``AttributeError``
if the ``attribute_name`` is not present on the base object.
This allows ducktyping of the decorated method based on ``attribute_name``.
See https://docs.python.org/3/howto/descriptor.html for an explanation of
descriptors.
"""
def __init__(self, fn, attribute_name):
self.fn = fn
self.get_attribute = attrgetter(attribute_name)
# update the docstring of the descriptor
update_wrapper(self, fn)
def __get__(self, obj, type=None):
# raise an AttributeError if the attribute is not present on the object
if obj is not None:
# delegate only on instances, not the classes.
# this is to allow access to the docstrings.
self.get_attribute(obj)
# lambda, but not partial, allows help() to work with update_wrapper
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
# update the docstring of the returned function
update_wrapper(out, self.fn)
return out
def if_delegate_has_method(delegate):
"""Create a decorator for methods that are delegated to a sub-estimator
This enables ducktyping by hasattr returning True according to the
sub-estimator.
>>> from sklearn.utils.metaestimators import if_delegate_has_method
>>>
>>>
>>> class MetaEst(object):
... def __init__(self, sub_est):
... self.sub_est = sub_est
...
... @if_delegate_has_method(delegate='sub_est')
... def predict(self, X):
... return self.sub_est.predict(X)
...
>>> class HasPredict(object):
... def predict(self, X):
... return X.sum(axis=1)
...
>>> class HasNoPredict(object):
... pass
...
>>> hasattr(MetaEst(HasPredict()), 'predict')
True
>>> hasattr(MetaEst(HasNoPredict()), 'predict')
False
"""
return lambda fn: _IffHasAttrDescriptor(fn, '%s.%s' % (delegate, fn.__name__))
| bsd-3-clause |
BlueBrain/deap | deap/gp.py | 9 | 46662 | # This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
"""The :mod:`gp` module provides the methods and classes to perform
Genetic Programming with DEAP. It essentially contains the classes to
build a Genetic Program Tree, and the functions to evaluate it.
This module support both strongly and loosely typed GP.
"""
import copy
import math
import random
import re
import sys
import warnings
from collections import defaultdict, deque
from functools import partial, wraps
from inspect import isclass
from operator import eq, lt
import tools # Needed by HARM-GP
######################################
# GP Data structure #
######################################
# Define the name of type for any types.
__type__ = object
class PrimitiveTree(list):
"""Tree specifically formatted for optimization of genetic
programming operations. The tree is represented with a
list where the nodes are appended in a depth-first order.
The nodes appended to the tree are required to
have an attribute *arity* which defines the arity of the
primitive. An arity of 0 is expected from terminals nodes.
"""
def __init__(self, content):
list.__init__(self, content)
def __deepcopy__(self, memo):
new = self.__class__(self)
new.__dict__.update(copy.deepcopy(self.__dict__, memo))
return new
def __setitem__(self, key, val):
# Check for most common errors
# Does NOT check for STGP constraints
if isinstance(key, slice):
if key.start >= len(self):
raise IndexError("Invalid slice object (try to assign a %s"
" in a tree of size %d). Even if this is allowed by the"
" list object slice setter, this should not be done in"
" the PrimitiveTree context, as this may lead to an"
" unpredictable behavior for searchSubtree or evaluate."
% (key, len(self)))
total = val[0].arity
for node in val[1:]:
total += node.arity - 1
if total != 0:
raise ValueError("Invalid slice assignation : insertion of"
" an incomplete subtree is not allowed in PrimitiveTree."
" A tree is defined as incomplete when some nodes cannot"
" be mapped to any position in the tree, considering the"
" primitives' arity. For instance, the tree [sub, 4, 5,"
" 6] is incomplete if the arity of sub is 2, because it"
" would produce an orphan node (the 6).")
elif val.arity != self[key].arity:
raise ValueError("Invalid node replacement with a node of a"
" different arity.")
list.__setitem__(self, key, val)
def __str__(self):
"""Return the expression in a human readable string.
"""
string = ""
stack = []
for node in self:
stack.append((node, []))
while len(stack[-1][1]) == stack[-1][0].arity:
prim, args = stack.pop()
string = prim.format(*args)
if len(stack) == 0:
break # If stack is empty, all nodes should have been seen
stack[-1][1].append(string)
return string
@classmethod
def from_string(cls, string, pset):
"""Try to convert a string expression into a PrimitiveTree given a
PrimitiveSet *pset*. The primitive set needs to contain every primitive
present in the expression.
:param string: String representation of a Python expression.
:param pset: Primitive set from which primitives are selected.
:returns: PrimitiveTree populated with the deserialized primitives.
"""
tokens = re.split("[ \t\n\r\f\v(),]", string)
expr = []
ret_types = deque()
for token in tokens:
if token == '':
continue
if len(ret_types) != 0:
type_ = ret_types.popleft()
else:
type_ = None
if token in pset.mapping:
primitive = pset.mapping[token]
if type_ is not None and not issubclass(primitive.ret, type_):
raise TypeError("Primitive {} return type {} does not "
"match the expected one: {}."
.format(primitive, primitive.ret, type_))
expr.append(primitive)
if isinstance(primitive, Primitive):
ret_types.extendleft(reversed(primitive.args))
else:
try:
token = eval(token)
except NameError:
raise TypeError("Unable to evaluate terminal: {}.".format(token))
if type_ is None:
type_ = type(token)
if not issubclass(type(token), type_):
raise TypeError("Terminal {} type {} does not "
"match the expected one: {}."
.format(token, type(token), type_))
expr.append(Terminal(token, False, type_))
return cls(expr)
@property
def height(self):
"""Return the height of the tree, or the depth of the
deepest node.
"""
stack = [0]
max_depth = 0
for elem in self:
depth = stack.pop()
max_depth = max(max_depth, depth)
stack.extend([depth + 1] * elem.arity)
return max_depth
@property
def root(self):
"""Root of the tree, the element 0 of the list.
"""
return self[0]
def searchSubtree(self, begin):
"""Return a slice object that corresponds to the
range of values that defines the subtree which has the
element with index *begin* as its root.
"""
end = begin + 1
total = self[begin].arity
while total > 0:
total += self[end].arity - 1
end += 1
return slice(begin, end)
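# Illustrative sketch (not part of the original DEAP source): building a tree from
# its string representation and locating a subtree. The primitive set and the
# expression used below are assumptions made for this example only.
def _example_primitive_tree():
    import operator
    pset = PrimitiveSet("MAIN", 2)
    pset.addPrimitive(operator.add, 2, name="add")
    tree = PrimitiveTree.from_string("add(ARG0, ARG1)", pset)
    # the slice rooted at index 0 covers the whole expression
    return tree.height, tree.searchSubtree(0)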
class Primitive(object):
"""Class that encapsulates a primitive and when called with arguments it
returns the Python code to call the primitive with the arguments.
>>> pr = Primitive("mul", (int, int), int)
>>> pr.format(1, 2)
'mul(1, 2)'
"""
__slots__ = ('name', 'arity', 'args', 'ret', 'seq')
def __init__(self, name, args, ret):
self.name = name
self.arity = len(args)
self.args = args
self.ret = ret
args = ", ".join(map("{{{0}}}".format, range(self.arity)))
self.seq = "{name}({args})".format(name=self.name, args=args)
def format(self, *args):
return self.seq.format(*args)
def __eq__(self, other):
if type(self) is type(other):
return all(getattr(self, slot) == getattr(other, slot)
for slot in self.__slots__)
else:
return NotImplemented
class Terminal(object):
"""Class that encapsulates terminal primitive in expression. Terminals can
be values or 0-arity functions.
"""
__slots__ = ('name', 'value', 'ret', 'conv_fct')
def __init__(self, terminal, symbolic, ret):
self.ret = ret
self.value = terminal
self.name = str(terminal)
self.conv_fct = str if symbolic else repr
@property
def arity(self):
return 0
def format(self):
return self.conv_fct(self.value)
def __eq__(self, other):
if type(self) is type(other):
return all(getattr(self, slot) == getattr(other, slot)
for slot in self.__slots__)
else:
return NotImplemented
class Ephemeral(Terminal):
"""Class that encapsulates a terminal which value is set when the
object is created. To mutate the value, a new object has to be
generated. This is an abstract base class. When subclassing, a
staticmethod 'func' must be defined.
"""
def __init__(self):
Terminal.__init__(self, self.func(), symbolic=False, ret=self.ret)
@staticmethod
def func():
"""Return a random value used to define the ephemeral state.
"""
raise NotImplementedError
class PrimitiveSetTyped(object):
"""Class that contains the primitives that can be used to solve a
Strongly Typed GP problem. The set also defines the return type of the
evolved function, and the types and number of its input arguments.
"""
def __init__(self, name, in_types, ret_type, prefix="ARG"):
self.terminals = defaultdict(list)
self.primitives = defaultdict(list)
self.arguments = []
# setting "__builtins__" to None avoid the context
# being polluted by builtins function when evaluating
# GP expression.
self.context = {"__builtins__": None}
self.mapping = dict()
self.terms_count = 0
self.prims_count = 0
self.name = name
self.ret = ret_type
self.ins = in_types
for i, type_ in enumerate(in_types):
arg_str = "{prefix}{index}".format(prefix=prefix, index=i)
self.arguments.append(arg_str)
term = Terminal(arg_str, True, type_)
self._add(term)
self.terms_count += 1
def renameArguments(self, **kargs):
"""Rename function arguments with new names from *kargs*.
"""
for i, old_name in enumerate(self.arguments):
if old_name in kargs:
new_name = kargs[old_name]
self.arguments[i] = new_name
self.mapping[new_name] = self.mapping[old_name]
self.mapping[new_name].value = new_name
del self.mapping[old_name]
def _add(self, prim):
def addType(dict_, ret_type):
if not ret_type in dict_:
new_list = []
for type_, list_ in dict_.items():
if issubclass(type_, ret_type):
for item in list_:
if not item in new_list:
new_list.append(item)
dict_[ret_type] = new_list
addType(self.primitives, prim.ret)
addType(self.terminals, prim.ret)
self.mapping[prim.name] = prim
if isinstance(prim, Primitive):
for type_ in prim.args:
addType(self.primitives, type_)
addType(self.terminals, type_)
dict_ = self.primitives
else:
dict_ = self.terminals
for type_ in dict_:
if issubclass(prim.ret, type_):
dict_[type_].append(prim)
def addPrimitive(self, primitive, in_types, ret_type, name=None):
"""Add a primitive to the set.
:param primitive: callable object or a function.
:param in_types: list of the primitive's argument types
:param ret_type: type returned by the primitive.
:param name: alternative name for the primitive instead
of its __name__ attribute.
"""
if name is None:
name = primitive.__name__
prim = Primitive(name, in_types, ret_type)
assert name not in self.context or \
self.context[name] is primitive, \
"Primitives are required to have a unique name. " \
"Consider using the argument 'name' to rename your "\
"second '%s' primitive." % (name,)
self._add(prim)
self.context[prim.name] = primitive
self.prims_count += 1
def addTerminal(self, terminal, ret_type, name=None):
"""Add a terminal to the set. Terminals can be named
using the optional *name* argument. This should be
used: to define a named constant (e.g. pi); to speed up
evaluation when the object is expensive to build; when
the object does not have a __repr__ function that returns
the code to build the object; or when the object class is
not a Python built-in.
:param terminal: Object, or a function with no arguments.
:param ret_type: Type of the terminal.
:param name: defines the name of the terminal in the expression.
"""
symbolic = False
if name is None and callable(terminal):
name = terminal.__name__
assert name not in self.context, \
"Terminals are required to have a unique name. " \
"Consider using the argument 'name' to rename your "\
"second %s terminal." % (name,)
if name is not None:
self.context[name] = terminal
terminal = name
symbolic = True
elif terminal in (True, False):
# To support True and False terminals with Python 2.
self.context[str(terminal)] = terminal
prim = Terminal(terminal, symbolic, ret_type)
self._add(prim)
self.terms_count += 1
def addEphemeralConstant(self, name, ephemeral, ret_type):
"""Add an ephemeral constant to the set. An ephemeral constant
is a no argument function that returns a random value. The value
of the constant is constant for a Tree, but may differ from one
Tree to another.
:param name: name used to refer to this ephemeral type.
:param ephemeral: function with no arguments returning a random value.
:param ret_type: type of the object returned by *ephemeral*.
"""
module_gp = globals()
if not name in module_gp:
class_ = type(name, (Ephemeral,), {'func': staticmethod(ephemeral),
'ret': ret_type})
module_gp[name] = class_
else:
class_ = module_gp[name]
if issubclass(class_, Ephemeral):
if class_.func is not ephemeral:
raise Exception("Ephemerals with different functions should "
"be named differently, even between psets.")
elif class_.ret is not ret_type:
raise Exception("Ephemerals with the same name and function "
"should have the same type, even between psets.")
else:
raise Exception("Ephemerals should be named differently "
"than classes defined in the gp module.")
self._add(class_)
self.terms_count += 1
def addADF(self, adfset):
"""Add an Automatically Defined Function (ADF) to the set.
:param adfset: PrimitiveSetTyped containing the primitives with which
the ADF can be built.
"""
prim = Primitive(adfset.name, adfset.ins, adfset.ret)
self._add(prim)
self.prims_count += 1
@property
def terminalRatio(self):
"""Return the ratio of the number of terminals on the number of all
kind of primitives.
"""
return self.terms_count / float(self.terms_count + self.prims_count)
class PrimitiveSet(PrimitiveSetTyped):
"""Class same as :class:`~deap.gp.PrimitiveSetTyped`, except there is no
definition of type.
"""
def __init__(self, name, arity, prefix="ARG"):
args = [__type__] * arity
PrimitiveSetTyped.__init__(self, name, args, __type__, prefix)
def addPrimitive(self, primitive, arity, name=None):
"""Add primitive *primitive* with arity *arity* to the set.
If a name *name* is provided, it will replace the attribute __name__
attribute to represent/identify the primitive.
"""
assert arity > 0, "arity should be >= 1"
args = [__type__] * arity
PrimitiveSetTyped.addPrimitive(self, primitive, args, __type__, name)
def addTerminal(self, terminal, name=None):
"""Add a terminal to the set."""
PrimitiveSetTyped.addTerminal(self, terminal, __type__, name)
def addEphemeralConstant(self, name, ephemeral):
"""Add an ephemeral constant to the set."""
PrimitiveSetTyped.addEphemeralConstant(self, name, ephemeral, __type__)
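# Illustrative sketch (not part of the original DEAP source): a minimal untyped
# primitive set for a one-variable symbolic regression problem; the primitive and
# terminal choices are assumptions made for this example only.
def _example_primitive_set():
    import operator
    pset = PrimitiveSet("MAIN", 1)
    pset.addPrimitive(operator.add, 2)
    pset.addPrimitive(operator.mul, 2)
    pset.addTerminal(1.0)
    pset.renameArguments(ARG0="x")
    return pset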
######################################
# GP Tree compilation functions #
######################################
def compile(expr, pset):
"""Compile the expression *expr*.
:param expr: Expression to compile. It can either be a PrimitiveTree,
a string of Python code or any object that when
converted into string produced a valid Python code
expression.
:param pset: Primitive set against which the expression is compile.
:returns: a function if the primitive set has 1 or more arguments,
otherwise the result produced by evaluating the tree.
"""
code = str(expr)
if len(pset.arguments) > 0:
# This section is a stripped version of the lambdify
# function of SymPy 0.6.6.
args = ",".join(arg for arg in pset.arguments)
code = "lambda {args}: {code}".format(args=args, code=code)
try:
return eval(code, pset.context, {})
except MemoryError:
_, _, traceback = sys.exc_info()
raise MemoryError, ("DEAP : Error in tree evaluation :"
" Python cannot evaluate a tree higher than 90. "
"To avoid this problem, you should use bloat control on your "
"operators. See the DEAP documentation for more information. "
"DEAP will now abort."), traceback
def compileADF(expr, psets):
"""Compile the expression represented by a list of trees. The first
element of the list is the main tree, and the following elements are
automatically defined functions (ADF) that can be called by the first
tree.
:param expr: Expression to compile. It can either be a PrimitiveTree,
a string of Python code or any object that when
converted into string produced a valid Python code
expression.
:param psets: List of primitive sets. Each set corresponds to an ADF
while the last set is associated with the expression
and should contain reference to the preceding ADFs.
:returns: a function if the main primitive set has 1 or more arguments,
otherwise the result produced by evaluating the tree.
"""
adfdict = {}
func = None
for pset, subexpr in reversed(zip(psets, expr)):
pset.context.update(adfdict)
func = compile(subexpr, pset)
adfdict.update({pset.name: func})
return func
######################################
# GP Program generation functions #
######################################
def genFull(pset, min_, max_, type_=None):
"""Generate an expression where each leaf has a the same depth
between *min* and *max*.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum Height of the produced trees.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: A full tree with all leaves at the same depth.
"""
def condition(height, depth):
"""Expression generation stops when the depth is equal to height."""
return depth == height
return generate(pset, min_, max_, condition, type_)
def genGrow(pset, min_, max_, type_=None):
"""Generate an expression where each leaf might have a different depth
between *min* and *max*.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum Height of the produced trees.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: A grown tree with leaves at possibly different depths.
"""
def condition(height, depth):
"""Expression generation stops when the depth is equal to height
or when it is randomly determined that a node should be a terminal.
"""
return depth == height or \
(depth >= min_ and random.random() < pset.terminalRatio)
return generate(pset, min_, max_, condition, type_)
def genHalfAndHalf(pset, min_, max_, type_=None):
"""Generate an expression with a PrimitiveSet *pset*.
Half the time, the expression is generated with :func:`~deap.gp.genGrow`,
the other half, the expression is generated with :func:`~deap.gp.genFull`.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum Height of the produced trees.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: Either, a full or a grown tree.
"""
method = random.choice((genGrow, genFull))
return method(pset, min_, max_, type_)
def genRamped(pset, min_, max_, type_=None):
"""
.. deprecated:: 1.0
The function has been renamed. Use :func:`~deap.gp.genHalfAndHalf` instead.
"""
warnings.warn("gp.genRamped has been renamed. Use genHalfAndHalf instead.",
FutureWarning)
return genHalfAndHalf(pset, min_, max_, type_)
def generate(pset, min_, max_, condition, type_=None):
"""Generate a Tree as a list of list. The tree is build
from the root to the leaves, and it stop growing when the
condition is fulfilled.
:param pset: Primitive set from which primitives are selected.
:param min_: Minimum height of the produced trees.
:param max_: Maximum Height of the produced trees.
:param condition: The condition is a function that takes two arguments,
the height of the tree to build and the current
depth in the tree.
:param type_: The type the tree should return when called; when
:obj:`None` (default) the return type of *pset* (pset.ret)
is assumed.
:returns: A grown tree with leaves at possibly different depths
depending on the condition function.
"""
if type_ is None:
type_ = pset.ret
expr = []
height = random.randint(min_, max_)
stack = [(0, type_)]
while len(stack) != 0:
depth, type_ = stack.pop()
if condition(height, depth):
try:
term = random.choice(pset.terminals[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError, "The gp.generate function tried to add "\
"a terminal of type '%s', but there is "\
"none available." % (type_,), traceback
if isclass(term):
term = term()
expr.append(term)
else:
try:
prim = random.choice(pset.primitives[type_])
except IndexError:
_, _, traceback = sys.exc_info()
raise IndexError, "The gp.generate function tried to add "\
"a primitive of type '%s', but there is "\
"none available." % (type_,), traceback
expr.append(prim)
for arg in reversed(prim.args):
stack.append((depth + 1, arg))
return expr
######################################
# GP Crossovers #
######################################
def cxOnePoint(ind1, ind2):
"""Randomly select in each individual and exchange each subtree with the
point as root between each individual.
:param ind1: First tree participating in the crossover.
:param ind2: Second tree participating in the crossover.
:returns: A tuple of two trees.
"""
if len(ind1) < 2 or len(ind2) < 2:
# No crossover on single node tree
return ind1, ind2
# List all available primitive types in each individual
types1 = defaultdict(list)
types2 = defaultdict(list)
if ind1.root.ret == __type__:
# Not STGP optimization
types1[__type__] = xrange(1, len(ind1))
types2[__type__] = xrange(1, len(ind2))
common_types = [__type__]
else:
for idx, node in enumerate(ind1[1:], 1):
types1[node.ret].append(idx)
for idx, node in enumerate(ind2[1:], 1):
types2[node.ret].append(idx)
common_types = set(types1.keys()).intersection(set(types2.keys()))
if len(common_types) > 0:
type_ = random.choice(list(common_types))
index1 = random.choice(types1[type_])
index2 = random.choice(types2[type_])
slice1 = ind1.searchSubtree(index1)
slice2 = ind2.searchSubtree(index2)
ind1[slice1], ind2[slice2] = ind2[slice2], ind1[slice1]
return ind1, ind2
def cxOnePointLeafBiased(ind1, ind2, termpb):
"""Randomly select crossover point in each individual and exchange each
subtree with the point as root between each individual.
:param ind1: First typed tree participating in the crossover.
:param ind2: Second typed tree participating in the crossover.
:param termpb: The probability of choosing a terminal node (leaf).
:returns: A tuple of two typed trees.
When the nodes are strongly typed, the operator makes sure the
second node type corresponds to the first node type.
The parameter *termpb* sets the probability to choose between a terminal
or non-terminal crossover point. For instance, as defined by Koza, non-
terminal primitives are selected for 90% of the crossover points, and
terminals for 10%, so *termpb* should be set to 0.1.
"""
if len(ind1) < 2 or len(ind2) < 2:
# No crossover on single node tree
return ind1, ind2
# Determine whether we keep terminals or primitives for each individual
terminal_op = partial(eq, 0)
primitive_op = partial(lt, 0)
arity_op1 = terminal_op if random.random() < termpb else primitive_op
arity_op2 = terminal_op if random.random() < termpb else primitive_op
# List all available primitive or terminal types in each individual
types1 = defaultdict(list)
types2 = defaultdict(list)
for idx, node in enumerate(ind1[1:], 1):
if arity_op1(node.arity):
types1[node.ret].append(idx)
for idx, node in enumerate(ind2[1:], 1):
if arity_op2(node.arity):
types2[node.ret].append(idx)
common_types = set(types1.keys()).intersection(set(types2.keys()))
if len(common_types) > 0:
# Set does not support indexing
type_ = random.sample(common_types, 1)[0]
index1 = random.choice(types1[type_])
index2 = random.choice(types2[type_])
slice1 = ind1.searchSubtree(index1)
slice2 = ind2.searchSubtree(index2)
ind1[slice1], ind2[slice2] = ind2[slice2], ind1[slice1]
return ind1, ind2
######################################
# GP Mutations #
######################################
def mutUniform(individual, expr, pset):
"""Randomly select a point in the tree *individual*, then replace the
subtree at that point as a root by the expression generated using method
:func:`expr`.
:param individual: The tree to be mutated.
:param expr: A function object that can generate an expression when
called.
:returns: A tuple of one tree.
"""
index = random.randrange(len(individual))
slice_ = individual.searchSubtree(index)
type_ = individual[index].ret
individual[slice_] = expr(pset=pset, type_=type_)
return individual,
def mutNodeReplacement(individual, pset):
"""Replaces a randomly chosen primitive from *individual* by a randomly
chosen primitive with the same number of arguments from the :attr:`pset`
attribute of the individual.
:param individual: The normal or typed tree to be mutated.
:returns: A tuple of one tree.
"""
if len(individual) < 2:
return individual,
index = random.randrange(1, len(individual))
node = individual[index]
if node.arity == 0: # Terminal
term = random.choice(pset.terminals[node.ret])
if isclass(term):
term = term()
individual[index] = term
else: # Primitive
prims = [p for p in pset.primitives[node.ret] if p.args == node.args]
individual[index] = random.choice(prims)
return individual,
def mutEphemeral(individual, mode):
"""This operator works on the constants of the tree *individual*. In
*mode* ``"one"``, it will change the value of one of the individual
ephemeral constants by calling its generator function. In *mode*
``"all"``, it will change the value of **all** the ephemeral constants.
:param individual: The normal or typed tree to be mutated.
:param mode: A string to indicate to change ``"one"`` or ``"all"``
ephemeral constants.
:returns: A tuple of one tree.
"""
if mode not in ["one", "all"]:
raise ValueError("Mode must be one of \"one\" or \"all\"")
ephemerals_idx = [index
for index, node in enumerate(individual)
if isinstance(node, Ephemeral)]
if len(ephemerals_idx) > 0:
if mode == "one":
ephemerals_idx = (random.choice(ephemerals_idx),)
for i in ephemerals_idx:
individual[i] = type(individual[i])()
return individual,
def mutInsert(individual, pset):
"""Inserts a new branch at a random position in *individual*. The subtree
at the chosen position is used as a child node of the created subtree; in
that way, it is really an insertion rather than a replacement. Note that
the original subtree will become one of the children of the new primitive
inserted, but not perforce the first (its position is randomly selected if
the new primitive has more than one child).
:param individual: The normal or typed tree to be mutated.
:returns: A tuple of one tree.
"""
index = random.randrange(len(individual))
node = individual[index]
slice_ = individual.searchSubtree(index)
choice = random.choice
# As we want to keep the current node as children of the new one,
# it must accept the return value of the current node
primitives = [p for p in pset.primitives[node.ret] if node.ret in p.args]
if len(primitives) == 0:
return individual,
new_node = choice(primitives)
new_subtree = [None] * len(new_node.args)
position = choice([i for i, a in enumerate(new_node.args) if a == node.ret])
for i, arg_type in enumerate(new_node.args):
if i != position:
term = choice(pset.terminals[arg_type])
if isclass(term):
term = term()
new_subtree[i] = term
new_subtree[position:position + 1] = individual[slice_]
new_subtree.insert(0, new_node)
individual[slice_] = new_subtree
return individual,
def mutShrink(individual):
"""This operator shrinks the *individual* by chosing randomly a branch and
replacing it with one of the branch's arguments (also randomly chosen).
:param individual: The tree to be shrinked.
:returns: A tuple of one tree.
"""
# We don't want to "shrink" the root
if len(individual) < 3 or individual.height <= 1:
return individual,
iprims = []
for i, node in enumerate(individual[1:], 1):
if isinstance(node, Primitive) and node.ret in node.args:
iprims.append((i, node))
if len(iprims) != 0:
index, prim = random.choice(iprims)
arg_idx = random.choice([i for i, type_ in enumerate(prim.args) if type_ == prim.ret])
rindex = index + 1
for _ in range(arg_idx + 1):
rslice = individual.searchSubtree(rindex)
subtree = individual[rslice]
rindex += len(subtree)
slice_ = individual.searchSubtree(index)
individual[slice_] = subtree
return individual,
######################################
# GP bloat control decorators #
######################################
def staticLimit(key, max_value):
"""Implement a static limit on some measurement on a GP tree, as defined
by Koza in [Koza1989]. It may be used to decorate both crossover and
mutation operators. When an invalid (over the limit) child is generated,
it is simply replaced by one of its parents, randomly selected.
This operator can be used to avoid memory errors occurring when the tree
gets higher than 90 levels (as Python puts a limit on the call stack
depth), because it can ensure that no tree higher than this limit will ever
be accepted in the population, except if it was generated at initialization
time.
:param key: The function to use in order to get the wanted value. For
instance, on a GP tree, ``operator.attrgetter('height')`` may
be used to set a depth limit, and ``len`` to set a size limit.
:param max_value: The maximum value allowed for the given measurement.
:returns: A decorator that can be applied to a GP operator using \
:func:`~deap.base.Toolbox.decorate`
.. note::
If you want to reproduce the exact behavior intended by Koza, set
*key* to ``operator.attrgetter('height')`` and *max_value* to 17.
.. [Koza1989] J.R. Koza, Genetic Programming - On the Programming of
Computers by Means of Natural Selection (MIT Press,
Cambridge, MA, 1992)
"""
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
keep_inds = [copy.deepcopy(ind) for ind in args]
new_inds = list(func(*args, **kwargs))
for i, ind in enumerate(new_inds):
if key(ind) > max_value:
new_inds[i] = random.choice(keep_inds)
return new_inds
return wrapper
return decorator
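# Illustrative sketch (not part of the original DEAP source): decorating the
# variation operators of a deap.base.Toolbox with the Koza-style depth limit;
# the toolbox and its registered "mate"/"mutate" aliases are assumed to exist.
def _example_static_limit(toolbox):
    import operator
    limit = staticLimit(key=operator.attrgetter("height"), max_value=17)
    toolbox.decorate("mate", limit)
    toolbox.decorate("mutate", limit)
    return toolbox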
######################################
# GP bloat control algorithms #
######################################
def harm(population, toolbox, cxpb, mutpb, ngen,
alpha, beta, gamma, rho, nbrindsmodel=-1, mincutoff=20,
stats=None, halloffame=None, verbose=__debug__):
"""Implement bloat control on a GP evolution using HARM-GP, as defined in
[Gardner2015]. It is implemented in the form of an evolution algorithm
(similar to :func:`~deap.algorithms.eaSimple`).
:param population: A list of individuals.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param cxpb: The probability of mating two individuals.
:param mutpb: The probability of mutating an individual.
:param ngen: The number of generation.
:param alpha: The HARM *alpha* parameter.
:param beta: The HARM *beta* parameter.
:param gamma: The HARM *gamma* parameter.
:param rho: The HARM *rho* parameter.
:param nbrindsmodel: The number of individuals to generate in order to
model the natural distribution. -1 is a special
value which uses the equation proposed in
[Gardner2015] to set the value of this parameter :
max(2000, len(population))
:param mincutoff: The absolute minimum value for the cutoff point. It is
used to ensure that HARM does not shrink the population
too much at the beginning of the evolution. The default
value is usually fine.
:param stats: A :class:`~deap.tools.Statistics` object that is updated
inplace, optional.
:param halloffame: A :class:`~deap.tools.HallOfFame` object that will
contain the best individuals, optional.
:param verbose: Whether or not to log the statistics.
:returns: The final population
:returns: A :class:`~deap.tools.Logbook` with the statistics of the
evolution
This function expects the :meth:`toolbox.mate`, :meth:`toolbox.mutate`,
:meth:`toolbox.select` and :meth:`toolbox.evaluate` aliases to be
registered in the toolbox.
.. note::
The recommended values for the HARM-GP parameters are *alpha=0.05*,
*beta=10*, *gamma=0.25*, *rho=0.9*. However, these parameters can be
adjusted to perform better on a specific problem (see the relevant
paper for tuning information). The number of individuals used to
model the natural distribution and the minimum cutoff point are less
important, their default value being effective in most cases.
.. [Gardner2015] M.-A. Gardner, C. Gagne, and M. Parizeau, Controlling
Code Growth by Dynamically Shaping the Genotype Size Distribution,
Genetic Programming and Evolvable Machines, 2015,
DOI 10.1007/s10710-015-9242-8
"""
def _genpop(n, pickfrom=[], acceptfunc=lambda s: True, producesizes=False):
# Generate a population of n individuals, using individuals in
# *pickfrom* if possible, with a *acceptfunc* acceptance function.
# If *producesizes* is true, also return a list of the produced
# individuals sizes.
# This function is used 1) to generate the natural distribution
# (in this case, pickfrom and acceptfunc should be left at their
# default values) and 2) to generate the final population, in which
# case pickfrom should be the natural population previously generated
# and acceptfunc a function implementing the HARM-GP algorithm.
producedpop = []
producedpopsizes = []
while len(producedpop) < n:
if len(pickfrom) > 0:
# If possible, use the already generated
# individuals (more efficient)
aspirant = pickfrom.pop()
if acceptfunc(len(aspirant)):
producedpop.append(aspirant)
if producesizes:
producedpopsizes.append(len(aspirant))
else:
opRandom = random.random()
if opRandom < cxpb:
# Crossover
aspirant1, aspirant2 = toolbox.mate(*map(toolbox.clone,
toolbox.select(population, 2)))
del aspirant1.fitness.values, aspirant2.fitness.values
if acceptfunc(len(aspirant1)):
producedpop.append(aspirant1)
if producesizes:
producedpopsizes.append(len(aspirant1))
if len(producedpop) < n and acceptfunc(len(aspirant2)):
producedpop.append(aspirant2)
if producesizes:
producedpopsizes.append(len(aspirant2))
else:
aspirant = toolbox.clone(toolbox.select(population, 1)[0])
if opRandom - cxpb < mutpb:
# Mutation
aspirant = toolbox.mutate(aspirant)[0]
del aspirant.fitness.values
if acceptfunc(len(aspirant)):
producedpop.append(aspirant)
if producesizes:
producedpopsizes.append(len(aspirant))
if producesizes:
return producedpop, producedpopsizes
else:
return producedpop
halflifefunc = lambda x: (x * float(alpha) + beta)
if nbrindsmodel == -1:
nbrindsmodel = max(2000, len(population))
logbook = tools.Logbook()
logbook.header = ['gen', 'nevals'] + (stats.fields if stats else [])
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in population if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
if halloffame is not None:
halloffame.update(population)
record = stats.compile(population) if stats else {}
logbook.record(gen=0, nevals=len(invalid_ind), **record)
if verbose:
print logbook.stream
# Begin the generational process
for gen in range(1, ngen + 1):
# Estimate the population's natural distribution of sizes
naturalpop, naturalpopsizes = _genpop(nbrindsmodel, producesizes=True)
naturalhist = [0] * (max(naturalpopsizes) + 3)
for indsize in naturalpopsizes:
# Kernel density estimation application
naturalhist[indsize] += 0.4
naturalhist[indsize - 1] += 0.2
naturalhist[indsize + 1] += 0.2
naturalhist[indsize + 2] += 0.1
if indsize - 2 >= 0:
naturalhist[indsize - 2] += 0.1
# Normalization
naturalhist = [val * len(population) / nbrindsmodel for val in naturalhist]
# Cutoff point selection
sortednatural = sorted(naturalpop, key=lambda ind: ind.fitness)
cutoffcandidates = sortednatural[int(len(population) * rho - 1):]
# Select the cutoff point, with an absolute minimum applied
# to avoid weird cases in the first generations
cutoffsize = max(mincutoff, len(min(cutoffcandidates, key=len)))
# Compute the target distribution
targetfunc = lambda x: (gamma * len(population) * math.log(2) /
halflifefunc(x)) * math.exp(-math.log(2) *
(x - cutoffsize) / halflifefunc(x))
targethist = [naturalhist[binidx] if binidx <= cutoffsize else
targetfunc(binidx) for binidx in range(len(naturalhist))]
# Compute the probabilities distribution
probhist = [t / n if n > 0 else t for n, t in zip(naturalhist, targethist)]
probfunc = lambda s: probhist[s] if s < len(probhist) else targetfunc(s)
acceptfunc = lambda s: random.random() <= probfunc(s)
# Generate offspring using the acceptance probabilities
# previously computed
offspring = _genpop(len(population), pickfrom=naturalpop,
acceptfunc=acceptfunc, producesizes=False)
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
# Update the hall of fame with the generated individuals
if halloffame is not None:
halloffame.update(offspring)
# Replace the current population by the offspring
population[:] = offspring
# Append the current generation statistics to the logbook
record = stats.compile(population) if stats else {}
logbook.record(gen=gen, nevals=len(invalid_ind), **record)
if verbose:
print logbook.stream
return population, logbook
def graph(expr):
"""Construct the graph of a tree expression. The tree expression must be
valid. It returns in order a node list, an edge list, and a dictionary of
the per node labels. The nodes are represented by numbers, the edges are
tuples connecting two nodes (number), and the labels are values of a
dictionary for which keys are the node numbers.
:param expr: A tree expression to convert into a graph.
:returns: A node list, an edge list, and a dictionary of labels.
The returned objects can be used directly to populate a
`pygraphviz <http://networkx.lanl.gov/pygraphviz/>`_ graph::
import pygraphviz as pgv
# [...] Execution of code that produces a tree expression
nodes, edges, labels = graph(expr)
g = pgv.AGraph()
g.add_nodes_from(nodes)
g.add_edges_from(edges)
g.layout(prog="dot")
for i in nodes:
n = g.get_node(i)
n.attr["label"] = labels[i]
g.draw("tree.pdf")
or a `NetworkX <http://networkx.github.com/>`_ graph::
import matplotlib.pyplot as plt
import networkx as nx
# [...] Execution of code that produces a tree expression
nodes, edges, labels = graph(expr)
g = nx.Graph()
g.add_nodes_from(nodes)
g.add_edges_from(edges)
pos = nx.graphviz_layout(g, prog="dot")
nx.draw_networkx_nodes(g, pos)
nx.draw_networkx_edges(g, pos)
nx.draw_networkx_labels(g, pos, labels)
plt.show()
.. note::
We encourage you to use `pygraphviz
<http://networkx.lanl.gov/pygraphviz/>`_ as the nodes might be plotted
out of order when using `NetworkX <http://networkx.github.com/>`_.
"""
nodes = range(len(expr))
edges = list()
labels = dict()
stack = []
for i, node in enumerate(expr):
if stack:
edges.append((stack[-1][0], i))
stack[-1][1] -= 1
labels[i] = node.name if isinstance(node, Primitive) else node.value
stack.append([i, node.arity])
while stack and stack[-1][1] == 0:
stack.pop()
return nodes, edges, labels
if __name__ == "__main__":
import doctest
doctest.testmod()
| lgpl-3.0 |
miloszz/DIRAC | Core/Utilities/Graphs/CurveGraph.py | 10 | 5056 | ########################################################################
# $HeadURL$
########################################################################
""" CurveGraph represents simple line graphs with markers.
The DIRAC Graphs package is derived from the GraphTool plotting package of the
CMS/Phedex Project by ... <to be added>
"""
__RCSID__ = "$Id$"
from DIRAC.Core.Utilities.Graphs.PlotBase import PlotBase
from DIRAC.Core.Utilities.Graphs.GraphUtilities import darkenColor, to_timestamp, PrettyDateLocator, \
PrettyDateFormatter, PrettyScalarFormatter
from matplotlib.lines import Line2D
from matplotlib.dates import date2num
import datetime
class CurveGraph( PlotBase ):
"""
The CurveGraph class is a straightforward line graph with markers
"""
def __init__(self,data,ax,prefs,*args,**kw):
PlotBase.__init__(self,data,ax,prefs,*args,**kw)
def draw( self ):
PlotBase.draw(self)
self.x_formatter_cb(self.ax)
if self.gdata.isEmpty():
return None
start_plot = 0
end_plot = 0
if "starttime" in self.prefs and "endtime" in self.prefs:
start_plot = date2num( datetime.datetime.fromtimestamp(to_timestamp(self.prefs['starttime'])))
end_plot = date2num( datetime.datetime.fromtimestamp(to_timestamp(self.prefs['endtime'])))
labels = self.gdata.getLabels()
labels.reverse()
# If it is a simple plot, no labels are used
# Evaluate the most appropriate color in this case
if self.gdata.isSimplePlot():
labels = [('SimplePlot',0.)]
color = self.prefs.get('plot_color','Default')
if color.find('#') != -1:
self.palette.setColor('SimplePlot',color)
else:
labels = [(color,0.)]
tmp_max_y = []
tmp_min_y = []
tmp_x = []
for label,num in labels:
xdata = []
ydata = []
xerror = []
yerror = []
color = self.palette.getColor(label)
plot_data = self.gdata.getPlotNumData(label)
for key, value, error in plot_data:
if value is None:
continue
tmp_x.append( key )
tmp_max_y.append( value + error )
tmp_min_y.append( value - error )
xdata.append( key )
ydata.append( value )
xerror.append( 0. )
yerror.append( error )
linestyle = self.prefs.get( 'linestyle', '-' )
marker = self.prefs.get( 'marker', 'o' )
markersize = self.prefs.get( 'markersize', 8. )
markeredgewidth = self.prefs.get( 'markeredgewidth', 1. )
if not self.prefs.get( 'error_bars', False ):
line = Line2D( xdata, ydata, color=color, linewidth=1., marker=marker, linestyle=linestyle,
markersize=markersize, markeredgewidth=markeredgewidth,
markeredgecolor = darkenColor( color ) )
self.ax.add_line( line )
else:
self.ax.errorbar( xdata, ydata, color=color, linewidth=2., marker=marker, linestyle=linestyle,
markersize=markersize, markeredgewidth=markeredgewidth,
markeredgecolor = darkenColor( color ), xerr = xerror, yerr = yerror,
ecolor=color )
ymax = max( tmp_max_y )
ymax *= 1.1
ymin = min( min(tmp_min_y), 0. )
ymin *= 1.1
if self.prefs.has_key('log_yaxis'):
ymin = 0.001
xmax=max(tmp_x)*1.1
if self.log_xaxis:
xmin = 0.001
else:
xmin = 0
ymin = self.prefs.get( 'ymin', ymin )
ymax = self.prefs.get( 'ymax', ymax )
xmin = self.prefs.get( 'xmin', xmin )
xmax = self.prefs.get( 'xmax', xmax )
self.ax.set_xlim( xmin=xmin, xmax=xmax )
self.ax.set_ylim( ymin=ymin, ymax=ymax )
if self.gdata.key_type == 'time':
if start_plot and end_plot:
self.ax.set_xlim( xmin=start_plot, xmax=end_plot)
else:
self.ax.set_xlim( xmin=min(tmp_x), xmax=max(tmp_x))
def x_formatter_cb( self, ax ):
if self.gdata.key_type == "string":
smap = self.gdata.getStringMap()
reverse_smap = {}
for key, val in smap.items():
reverse_smap[val] = key
ticks = smap.values()
ticks.sort()
ax.set_xticks( [i+.5 for i in ticks] )
ax.set_xticklabels( [reverse_smap[i] for i in ticks] )
labels = ax.get_xticklabels()
ax.grid( False )
if self.log_xaxis:
xmin = 0.001
else:
xmin = 0
ax.set_xlim( xmin=xmin,xmax=len(ticks) )
elif self.gdata.key_type == "time":
dl = PrettyDateLocator()
df = PrettyDateFormatter( dl )
ax.xaxis.set_major_locator( dl )
ax.xaxis.set_major_formatter( df )
ax.xaxis.set_clip_on(False)
sf = PrettyScalarFormatter( )
ax.yaxis.set_major_formatter( sf )
else:
try:
super(CurveGraph, self).x_formatter_cb( ax )
except:
return None
| gpl-3.0 |
wanggang3333/scikit-learn | sklearn/decomposition/tests/test_factor_analysis.py | 222 | 3055 | # Author: Christian Osendorfer <[email protected]>
# Alexandre Gramfort <[email protected]>
# Licence: BSD3
import numpy as np
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils import ConvergenceWarning
from sklearn.decomposition import FactorAnalysis
def test_factor_analysis():
# Test FactorAnalysis ability to recover the data covariance structure
rng = np.random.RandomState(0)
n_samples, n_features, n_components = 20, 5, 3
# Some random settings for the generative model
W = rng.randn(n_components, n_features)
# latent variable of dim 3, 20 of it
h = rng.randn(n_samples, n_components)
# using gamma to model different noise variance
# per component
noise = rng.gamma(1, size=n_features) * rng.randn(n_samples, n_features)
# generate observations
# wlog, mean is 0
X = np.dot(h, W) + noise
assert_raises(ValueError, FactorAnalysis, svd_method='foo')
fa_fail = FactorAnalysis()
fa_fail.svd_method = 'foo'
assert_raises(ValueError, fa_fail.fit, X)
fas = []
for method in ['randomized', 'lapack']:
fa = FactorAnalysis(n_components=n_components, svd_method=method)
fa.fit(X)
fas.append(fa)
X_t = fa.transform(X)
assert_equal(X_t.shape, (n_samples, n_components))
assert_almost_equal(fa.loglike_[-1], fa.score_samples(X).sum())
assert_almost_equal(fa.score_samples(X).mean(), fa.score(X))
diff = np.all(np.diff(fa.loglike_))
assert_greater(diff, 0., 'Log likelihood did not increase')
# Sample Covariance
scov = np.cov(X, rowvar=0., bias=1.)
# Model Covariance
mcov = fa.get_covariance()
diff = np.sum(np.abs(scov - mcov)) / W.size
assert_less(diff, 0.1, "Mean absolute difference is %f" % diff)
fa = FactorAnalysis(n_components=n_components,
noise_variance_init=np.ones(n_features))
assert_raises(ValueError, fa.fit, X[:, :2])
f = lambda x, y: np.abs(getattr(x, y)) # sign will not be equal
fa1, fa2 = fas
for attr in ['loglike_', 'components_', 'noise_variance_']:
assert_almost_equal(f(fa1, attr), f(fa2, attr))
fa1.max_iter = 1
fa1.verbose = True
assert_warns(ConvergenceWarning, fa1.fit, X)
# Test get_covariance and get_precision with n_components == n_features
# with n_components < n_features and with n_components == 0
for n_components in [0, 2, X.shape[1]]:
fa.n_components = n_components
fa.fit(X)
cov = fa.get_covariance()
precision = fa.get_precision()
assert_array_almost_equal(np.dot(cov, precision),
np.eye(X.shape[1]), 12)
| bsd-3-clause |
zangsir/sms-tools | lectures/03-Fourier-properties/plots-code/linearity.py | 26 | 1183 | import matplotlib.pyplot as plt
import numpy as np
from scipy.io.wavfile import read
from scipy.fftpack import fft, ifft
a = 0.5
b = 1.0
k1 = 20
k2 = 25
N = 128
x1 = a*np.exp(1j*2*np.pi*k1/N*np.arange(N))
x2 = b*np.exp(1j*2*np.pi*k2/N*np.arange(N))
plt.figure(1, figsize=(9.5, 7))
plt.subplot(321)
plt.title('x1 (amp=.5, freq=20)')
plt.plot(np.arange(0, N, 1.0), np.real(x1), lw=1.5)
plt.axis([0, N, -1, 1])
plt.subplot(322)
plt.title('x2 (amp=1, freq=25)')
plt.plot(np.arange(0, N, 1.0), np.real(x2), lw=1.5)
plt.axis([0, N, -1, 1])
X1 = fft(x1)
mX1 = abs(X1)/N
plt.subplot(323)
plt.title('mX1 (amp=.5, freq=20)')
plt.plot(np.arange(0, N, 1.0), mX1, 'r', lw=1.5)
plt.axis([0,N,0,1])
X2 = fft(x2)
mX2 = abs(X2)/N
plt.subplot(324)
plt.title('mX2 (amp=1, freq=25)')
plt.plot(np.arange(0, N, 1.0), mX2, 'r', lw=1.5)
plt.axis([0,N,0,1])
x = x1 + x2
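# Linearity of the DFT: DFT(a*x1 + b*x2) = a*DFT(x1) + b*DFT(x2). Because the two
# components sit in different frequency bins (k1=20, k2=25), their magnitude
# spectra also add directly, which the last two subplots illustrate.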
plt.subplot(325)
plt.title('mX1+mX2')
plt.plot(np.arange(0, N, 1.0), mX1+mX2, 'r', lw=1.5)
plt.axis([0, N, 0, 1])
X = fft(x)
mX= abs(X)/N
plt.subplot(326)
plt.title('DFT(x1+x2)')
plt.plot(np.arange(0, N, 1.0), mX, 'r', lw=1.5)
plt.axis([0,N,0,1])
plt.tight_layout()
plt.savefig('linearity.png')
plt.show()
| agpl-3.0 |
JEB12345/PhD_Docs_UCSC | Dissertation/tex/ASME-journal/results/testing/logs/script-python-analyzelogs.py | 6 | 5096 | import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import prettyplotlib as ppl
data = np.loadtxt("muscleRestLengths.csv", delimiter=",")[200:,1:] #discard first 2 seconds and time column
data_normalized = data-data.mean(0)
data_normalized /= data_normalized.std(0)
#plot box data
plt.figure()
plt.boxplot(data,0,'')
#from matplotlib.mlab import PCA
#r = PCA(data_normalized)
#plt.plot(r.fracs,'+-') #only four "principal" components!
#because there are only four steps in the signal!
#plt.figure()
#plt.title("PCA first 4 principal components")
#plt.plot(r.project(data_normalized)[:,:4]+8*np.arange(4))
#now do something useful
#signal periodicity is 400 BTW
#plt.figure()
#plt.plot(data_normalized+np.arange(24)*4,'blue')
#plt.plot(data_normalized[:,11]+11*4,'red',linewidth=4)
#plt.title('Actuator 11 is different!')
#plt.ylabel("muscleRestLengths")
#slow
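# ncc[i,j] holds the peak of the absolute normalized cross-correlation between
# actuator signals i and j; values close to 1 mean the two signals are
# (possibly time-shifted) copies of each other.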
ncc = np.zeros((data_normalized.shape[1],data_normalized.shape[1]))
for i in xrange(data_normalized.shape[1]):
for j in xrange(data_normalized.shape[1]):
ncc[i,j] = np.abs(np.correlate(data_normalized[:,i],data_normalized[:,j],mode='full')/(data_normalized.shape[0]-1)).max()
#most signals are highly correlated!!!
#let's look at the delays
ncc_delay = np.zeros((data_normalized.shape[1],data_normalized.shape[1]))
for i in xrange(data_normalized.shape[1]):
for j in xrange(data_normalized.shape[1]):
ncc_delay[i,j] = (np.correlate(data_normalized[:,i],data_normalized[:,j],mode='full')).argmax()-5800+1
#time for something simple: the average equilibrium length of each motor
#plt.figure()
#plt.plot(np.sort(data.mean(0)),'+-')
#they all seem different, so the algorithm hasn't learned a symmetric gait
#combine results
#plt.figure()
#plt.subplot(1,2,1)
#plt.imshow(ncc,interpolation='nearest',cmap=plt.cm.gray_r)
#plt.title("Normalized Cross-Correlation")
#plt.colorbar()
#plt.subplot(1,2,2)
#plt.imshow(np.where(np.abs(ncc_delay)%400>=200,400-np.abs(ncc_delay)%400,np.abs(ncc_delay)%400),interpolation='nearest',cmap=plt.cm.gray_r)
#plt.title("Optimal time delay")
#plt.colorbar()
#Let's time shift all the signals to align with signal 0
data_normalized_aligned = np.zeros(data.shape)
for i in xrange(24):
data_normalized_aligned[:,i] = (np.roll(data_normalized[:,i],int(ncc_delay[0,i])))
#THIS LOOKS FUNNY
#plt.figure()
#plt.plot(data_normalized_aligned)
#plt.title("time aligned normalized signals")
#plt.figure()
#for i in xrange(24):
# plt.plot(data_normalized[:,np.argsort(ncc_delay[0])[i]]+2*i)
#plt.title("signals ordered by time diff wrt signal 0, black = delay")
#for i in xrange(10):
# plt.plot(-np.sort(ncc_delay[0])+400*i,np.arange(24)*2,'black',linewidth=4)
#if we look at the PCA of this, then 98% of the variance is explained by the first 2 components!
#plt.figure()
#r2 = PCA(data_normalized_aligned)
#plt.plot(r2.project(data_normalized_aligned)[:,:2]+8*np.arange(2))
#you need a sign wave and a double frequency sign wave
#td = np.zeros(ncc.shape)
#for i in xrange(24):
# td[i,:] = np.sort((ncc_delay[i]+400+int(ncc_delay[0,i]))%800)
#plt.figure()
#plt.title("sequential activation of motors")
#plt.plot(td.T)
##Atil
import scipy.cluster.hierarchy as hclus
#do clustering using the correlation matrix # (1-ncc) or ncc doesn't matter
plt.figure()
linkage_matrix=hclus.linkage(1-ncc,method='ward');
dend = hclus.dendrogram(linkage_matrix,
color_threshold=0.3,
show_leaf_counts=True)
#order the correlation matrix according to the clustering
ncc_ordered = np.zeros((data_normalized.shape[1],data_normalized.shape[1]))
for i in xrange(data_normalized.shape[1]):
for j in xrange(data_normalized.shape[1]):
ncc_ordered[i,j]=ncc[dend['leaves'][i],j]
#show normalized cross correlation coeff
f, axarr = plt.subplots(1,2)
im1 = axarr[0].imshow(ncc,interpolation='nearest',cmap=plt.cm.gray_r)
im2 = axarr[1].imshow(ncc_ordered,interpolation='nearest',cmap=plt.cm.gray_r)
f.colorbar(im2)
#order signals according to the clustering
data_normalized_aligned_ordered = np.zeros(data.shape)
for i in xrange(24):
ii=dend['leaves'][i];
data_normalized_aligned_ordered[:,i] = data_normalized_aligned[:,ii]
f, axarr = plt.subplots(2,1)
im2 = axarr[0].plot(data_normalized+np.arange(24)*4,'blue')
im1 = axarr[1].plot(data_normalized_aligned_ordered+np.arange(24)*4,'blue')
f.axes[0].get_xaxis().set_visible(False)
f.axes[0].get_yaxis().set_visible(False)
f.axes[1].get_yaxis().set_visible(False)
#f.axes[1].text(0.3,0.3,'s\nd\nf\ng',fontsize=10)
for i in xrange(24):
axarr[0].annotate(i,xy=(5800,4*i),xytext=(0,0),textcoords='offset points',fontsize=10)
rect1 = matplotlib.patches.Rectangle((2350-ncc_delay[0,i],4*i-2), 1200, 4, color='red',alpha=0.2)
axarr[0].add_patch(rect1)
rect1 = matplotlib.patches.Rectangle((2750-ncc_delay[0,i],4*i-2), 400, 4, color='yellow')
axarr[0].add_patch(rect1)
axarr[1].annotate(dend['leaves'][i],xy=(5800,4*i),xytext=((i%1),0),textcoords='offset points',fontsize=10)
axarr[1].axvspan(2350,3550,color='red',alpha=0.2)
axarr[1].axvspan(2750,3150,color='yellow')
| mit |
RJT1990/pyflux | pyflux/gas/gasreg.py | 1 | 32794 | import sys
if sys.version_info < (3,):
range = xrange
import numpy as np
import pandas as pd
import scipy.stats as ss
import scipy.special as sp
from patsy import dmatrices, dmatrix, demo_data
from .. import families as fam
from .. import tsm as tsm
from .. import data_check as dc
from .gas_core_recursions import gas_reg_recursion
class GASReg(tsm.TSM):
""" Inherits time series methods from TSM class.
**** GENERALIZED AUTOREGRESSIVE SCORE (GAS) REGRESSION MODELS ****
Parameters
----------
formula : string
patsy string describing the regression
data : pd.DataFrame or np.array
Field to specify the data that will be used
family : GAS family object
Which distribution to use, e.g. GASNormal()
"""
def __init__(self, formula, data, family):
# Initialize TSM object
super(GASReg,self).__init__('GASReg')
# Latent Variables
self.max_lag = 0
self._z_hide = 0 # Whether to cutoff variance latent variables from results
self.supported_methods = ["MLE","PML","Laplace","M-H","BBVI"]
self.default_method = "MLE"
self.multivariate_model = False
self.skewness = False
# Format the data
self.is_pandas = True # This is compulsory for this model type
self.data_original = data
self.formula = formula
self.y, self.X = dmatrices(formula, data)
self.y_name = self.y.design_info.describe()
self.X_names = self.X.design_info.describe().split(" + ")
self.y = self.y.astype(np.float)
self.X = self.X.astype(np.float)
self.z_no = self.X.shape[1]
self.data_name = self.y_name
self.y = np.array([self.y]).ravel()
self.data = self.y
self.X = np.array([self.X])[0]
self.index = data.index
self.initial_values = np.zeros(self.z_no)
self.data_length = self.data.shape[0]
self._create_model_matrices()
self._create_latent_variables()
self.family = family
self.model_name2, self.link, self.scale, self.shape, self.skewness, self.mean_transform, self.cythonized = self.family.setup()
# Identify whether model has cythonized backend - then choose update type
if self.cythonized is True:
self._model = self._cythonized_model
self._mb_model = self._cythonized_mb_model
self.recursion = self.family.gradientreg_recursion()
else:
self._model = self._uncythonized_model
self._mb_model = self._uncythonized_mb_model
self.model_name = self.model_name2 + " GAS Regression"
# Build any remaining latent variables that are specific to the family chosen
for no, i in enumerate(self.family.build_latent_variables()):
self.latent_variables.add_z(i[0],i[1],i[2])
self.latent_variables.z_list[no+self.z_no].start = i[3]
self.family_z_no = len(self.family.build_latent_variables())
self.z_no += len(self.family.build_latent_variables())
def _create_model_matrices(self):
""" Creates model matrices/vectors
Returns
----------
None (changes model attributes)
"""
self.model_Y = self.data
self.model_scores = np.zeros((self.X.shape[1], self.model_Y.shape[0]+1))
def _create_latent_variables(self):
""" Creates model latent variables
Returns
----------
None (changes model attributes)
"""
for parm in range(self.z_no):
self.latent_variables.add_z('Scale ' + self.X_names[parm], fam.Flat(transform='exp'), fam.Normal(0, 3))
self.latent_variables.z_list[parm].start = -5.0
self.z_no = len(self.latent_variables.z_list)
def _get_scale_and_shape(self,parm):
""" Obtains appropriate model scale and shape latent variables
Parameters
----------
parm : np.array
Transformed latent variable vector
Returns
----------
- Tuple of model scale, shape and skewness latent variable values
"""
if self.scale is True:
if self.shape is True:
model_shape = parm[-1]
model_scale = parm[-2]
else:
model_shape = 0
model_scale = parm[-1]
else:
model_scale = 0
model_shape = 0
if self.skewness is True:
model_skewness = parm[-3]
else:
model_skewness = 0
return model_scale, model_shape, model_skewness
def _get_scale_and_shape_sim(self, transformed_lvs):
""" Obtains model scale, shape, skewness latent variables for
a 2d array of simulations.
Parameters
----------
transformed_lvs : np.array
Transformed latent variable vector (2d - with draws of each variable)
Returns
----------
- Tuple of np.arrays (each being scale, shape and skewness draws)
"""
if self.scale is True:
if self.shape is True:
model_shape = self.latent_variables.z_list[-1].prior.transform(transformed_lvs[-1, :])
model_scale = self.latent_variables.z_list[-2].prior.transform(transformed_lvs[-2, :])
else:
model_shape = np.zeros(transformed_lvs.shape[1])
model_scale = self.latent_variables.z_list[-1].prior.transform(transformed_lvs[-1, :])
else:
model_scale = np.zeros(transformed_lvs.shape[1])
model_shape = np.zeros(transformed_lvs.shape[1])
if self.skewness is True:
model_skewness = self.latent_variables.z_list[-3].prior.transform(transformed_lvs[-3, :])
else:
model_skewness = np.zeros(transformed_lvs.shape[1])
return model_scale, model_shape, model_skewness
def _cythonized_model(self, beta):
""" Creates the structure of the model
Parameters
----------
beta : np.array
Contains untransformed starting values for latent variables
Returns
----------
theta : np.array
Contains the predicted values for the time series
Y : np.array
Contains the length-adjusted time series (accounting for lags)
scores : np.array
Contains the scores for the time series
"""
parm = np.array([self.latent_variables.z_list[k].prior.transform(beta[k]) for k in range(beta.shape[0])])
coefficients = np.zeros((self.X.shape[1],self.model_Y.shape[0]+1))
coefficients[:,0] = self.initial_values
theta = np.zeros(self.model_Y.shape[0]+1)
model_scale, model_shape, model_skewness = self._get_scale_and_shape(parm)
# Loop over time series
theta, self.model_scores, coefficients = self.recursion(parm, theta, self.X, coefficients, self.model_scores, self.model_Y, self.model_Y.shape[0], model_scale, model_shape, model_skewness)
return np.array(theta[:-1]), self.model_Y, self.model_scores, coefficients
def _cythonized_mb_model(self, beta, mini_batch):
""" Creates the structure of the model
Parameters
----------
beta : np.array
Contains untransformed starting values for latent variables
mini_batch : int
Size of each mini batch of data
Returns
----------
theta : np.array
Contains the predicted values for the time series
Y : np.array
Contains the length-adjusted time series (accounting for lags)
scores : np.array
Contains the scores for the time series
"""
rand_int = np.random.randint(low=0, high=self.data_length-mini_batch-self.max_lag+1)
sample = np.arange(start=rand_int, stop=rand_int+mini_batch)
data = self.data[sample]
X = self.X[sample, :]
Y = data[self.max_lag:]
parm = np.array([self.latent_variables.z_list[k].prior.transform(beta[k]) for k in range(beta.shape[0])])
coefficients = np.zeros((X.shape[1], Y.shape[0]+1))
coefficients[:,0] = self.initial_values
theta = np.zeros(Y.shape[0]+1)
model_scale, model_shape, model_skewness = self._get_scale_and_shape(parm)
# Loop over time series
theta, self.model_scores, coefficients = self.recursion(parm, theta, X, coefficients, self.model_scores, Y, Y.shape[0], model_scale, model_shape, model_skewness)
return np.array(theta[:-1]), Y, self.model_scores, coefficients
def _uncythonized_model(self, beta):
""" Creates the structure of the model
Parameters
----------
beta : np.array
Contains untransformed starting values for latent variables
Returns
----------
theta : np.array
Contains the predicted values for the time series
Y : np.array
Contains the length-adjusted time series (accounting for lags)
scores : np.array
Contains the scores for the time series
"""
parm = np.array([self.latent_variables.z_list[k].prior.transform(beta[k]) for k in range(beta.shape[0])])
coefficients = np.zeros((self.X.shape[1],self.model_Y.shape[0]+1))
coefficients[:,0] = self.initial_values
theta = np.zeros(self.model_Y.shape[0]+1)
model_scale, model_shape, model_skewness = self._get_scale_and_shape(parm)
# Loop over time series
theta, self.model_scores, coefficients = gas_reg_recursion(parm, theta, self.X, coefficients, self.model_scores, self.model_Y, self.model_Y.shape[0],
self.family.reg_score_function, self.link, model_scale, model_shape, model_skewness, self.max_lag)
return theta[:-1], self.model_Y, self.model_scores, coefficients
def _uncythonized_mb_model(self, beta, mini_batch):
""" Creates the structure of the model
Parameters
----------
beta : np.array
Contains untransformed starting values for latent variables
mini_batch : int
Size of each mini batch of data
Returns
----------
theta : np.array
Contains the predicted values for the time series
Y : np.array
Contains the length-adjusted time series (accounting for lags)
scores : np.array
Contains the scores for the time series
"""
rand_int = np.random.randint(low=0, high=self.data_length-mini_batch-self.max_lag+1)
sample = np.arange(start=rand_int, stop=rand_int+mini_batch)
data = self.data[sample]
X = self.X[sample, :]
Y = data[self.max_lag:]
parm = np.array([self.latent_variables.z_list[k].prior.transform(beta[k]) for k in range(beta.shape[0])])
coefficients = np.zeros((X.shape[1], Y.shape[0]+1))
coefficients[:,0] = self.initial_values
theta = np.zeros(Y.shape[0]+1)
model_scale, model_shape, model_skewness = self._get_scale_and_shape(parm)
# Loop over time series
theta, self.model_scores, coefficients = gas_reg_recursion(parm, theta, X, coefficients, self.model_scores, Y, Y.shape[0],
self.family.reg_score_function, self.link, model_scale, model_shape, model_skewness, self.max_lag)
return theta[:-1], Y, self.model_scores, coefficients
def _preoptimize_model(self, initials, method):
""" Preoptimizes the model by estimating a static model, then a quick search of good dynamic parameters
Parameters
----------
initials : np.array
A vector of initial values
method : str
One of 'MLE' or 'PML' (the optimization options)
Returns
----------
best_start : np.array
Vector of the best starting values found for the latent variables
"""
# Random search for good starting values
start_values = []
start_values.append(np.ones(len(self.X_names))*-2.0)
start_values.append(np.ones(len(self.X_names))*-3.0)
start_values.append(np.ones(len(self.X_names))*-4.0)
start_values.append(np.ones(len(self.X_names))*-5.0)
best_start = self.latent_variables.get_z_starting_values()
best_lik = self.neg_loglik(self.latent_variables.get_z_starting_values())
proposal_start = best_start.copy()
for start in start_values:
proposal_start[:len(self.X_names)] = start
proposal_likelihood = self.neg_loglik(proposal_start)
if proposal_likelihood < best_lik:
best_lik = proposal_likelihood
best_start = proposal_start.copy()
return best_start
def neg_loglik(self, beta):
""" Returns the negative loglikelihood of the model
Parameters
----------
beta : np.array
Contains untransformed starting values for latent variables
"""
theta, Y, scores,_ = self._model(beta)
parm = np.array([self.latent_variables.z_list[k].prior.transform(beta[k]) for k in range(beta.shape[0])])
model_scale, model_shape, model_skewness = self._get_scale_and_shape(parm)
return self.family.neg_loglikelihood(Y,self.link(theta),model_scale,model_shape,model_skewness)
def mb_neg_loglik(self, beta, mini_batch):
""" Returns the negative loglikelihood of the model
Parameters
----------
beta : np.array
Contains untransformed starting values for latent variables
mini_batch : int
Size of each mini batch of data
"""
theta, Y, scores,_ = self._mb_model(beta, mini_batch)
parm = np.array([self.latent_variables.z_list[k].prior.transform(beta[k]) for k in range(beta.shape[0])])
model_scale, model_shape, model_skewness = self._get_scale_and_shape(parm)
return self.family.neg_loglikelihood(Y,self.link(theta),model_scale,model_shape,model_skewness)
def plot_fit(self, **kwargs):
""" Plots the fit of the model
Notes
----------
Intervals are bootstrapped as follows: take the filtered values from the
algorithm (thetas). Use these thetas to generate a pseudo data stream from
the measurement density. Use the GAS algorithm and estimated latent variables to
filter the pseudo data. Repeat this N times.
Returns
----------
None (plots data and the fit)
"""
import matplotlib.pyplot as plt
import seaborn as sns
figsize = kwargs.get('figsize',(10,7))
if self.latent_variables.estimated is False:
raise Exception("No latent variables estimated!")
else:
date_index = self.index.copy()
mu, Y, scores, coefficients = self._model(self.latent_variables.get_z_values())
if self.model_name2 == "Exponential":
values_to_plot = 1.0/self.link(mu)
elif self.model_name2 == "Skewt":
t_params = self.transform_z()
model_scale, model_shape, model_skewness = self._get_scale_and_shape(t_params)
m1 = (np.sqrt(model_shape)*sp.gamma((model_shape-1.0)/2.0))/(np.sqrt(np.pi)*sp.gamma(model_shape/2.0))
additional_loc = (model_skewness - (1.0/model_skewness))*model_scale*m1
values_to_plot = mu + additional_loc
else:
values_to_plot = self.link(mu)
plt.figure(figsize=figsize)
plt.subplot(len(self.X_names)+1, 1, 1)
plt.title(self.y_name + " Filtered")
plt.plot(date_index,Y,label='Data')
plt.plot(date_index,values_to_plot,label='GAS Filter',c='black')
plt.legend(loc=2)
for coef in range(0,len(self.X_names)):
plt.subplot(len(self.X_names)+1, 1, 2+coef)
plt.title("Beta " + self.X_names[coef])
plt.plot(date_index,coefficients[coef,0:-1],label='Coefficient')
plt.legend(loc=2)
plt.show()
def plot_predict(self, h=5, past_values=20, intervals=True, oos_data=None, **kwargs):
""" Makes forecast with the estimated model
Parameters
----------
h : int (default : 5)
How many steps ahead would you like to forecast?
past_values : int (default : 20)
How many past observations to show on the forecast graph?
intervals : Boolean
Would you like to show prediction intervals for the forecast?
oos_data : pd.DataFrame
Data for the variables to be used out of sample (ys can be NaNs)
Returns
----------
- Plot of the forecast
"""
import matplotlib.pyplot as plt
import seaborn as sns
figsize = kwargs.get('figsize',(10,7))
if self.latent_variables.estimated is False:
raise Exception("No latent variables estimated!")
else:
# Sort/manipulate the out-of-sample data
_, X_oos = dmatrices(self.formula, oos_data)
X_oos = np.array([X_oos])[0]
X_pred = X_oos[:h]
date_index = self.shift_dates(h)
if self.latent_variables.estimation_method in ['M-H']:
sim_vector = np.zeros([15000,h])
for n in range(0, 15000):
t_z = self.draw_latent_variables(nsims=1).T[0]
_, Y, _, coefficients = self._model(t_z)
coefficients_star = coefficients.T[-1]
theta_pred = np.dot(np.array([coefficients_star]), X_pred.T)[0]
t_z = np.array([self.latent_variables.z_list[k].prior.transform(t_z[k]) for k in range(t_z.shape[0])])
model_scale, model_shape, model_skewness = self._get_scale_and_shape(t_z)
sim_vector[n,:] = self.family.draw_variable(self.link(theta_pred), model_scale, model_shape, model_skewness, theta_pred.shape[0])
mean_values = np.append(Y, self.link(np.array([np.mean(i) for i in sim_vector.T])))
else:
# Retrieve data, dates and (transformed) latent variables
_, Y, _, coefficients = self._model(self.latent_variables.get_z_values())
coefficients_star = coefficients.T[-1]
theta_pred = np.dot(np.array([coefficients_star]), X_pred.T)[0]
t_z = self.transform_z()
sim_vector = np.zeros([15000,h])
mean_values = np.append(Y, self.link(theta_pred))
model_scale, model_shape, model_skewness = self._get_scale_and_shape(t_z)
if self.model_name2 == "Skewt":
m1 = (np.sqrt(model_shape)*sp.gamma((model_shape-1.0)/2.0))/(np.sqrt(np.pi)*sp.gamma(model_shape/2.0))
mean_values += (model_skewness - (1.0/model_skewness))*model_scale*m1
for n in range(0,15000):
sim_vector[n,:] = self.family.draw_variable(self.link(theta_pred),model_scale,model_shape,model_skewness,theta_pred.shape[0])
sim_vector = sim_vector.T
error_bars = []
for pre in range(5,100,5):
error_bars.append(np.insert([np.percentile(i,pre) for i in sim_vector], 0, mean_values[-h-1]))
forecasted_values = mean_values[-h-1:]
plot_values = mean_values[-h-past_values:]
plot_index = date_index[-h-past_values:]
plt.figure(figsize=figsize)
if intervals == True:
alpha =[0.15*i/float(100) for i in range(50,12,-2)]
for count in range(9):
plt.fill_between(date_index[-h-1:], error_bars[count], error_bars[-count],
alpha=alpha[count])
plt.plot(plot_index,plot_values)
plt.title("Forecast for " + self.data_name)
plt.xlabel("Time")
plt.ylabel(self.data_name)
plt.show()
def plot_predict_is(self, h=5, fit_once=True, fit_method='MLE', **kwargs):
""" Plots forecasts with the estimated model against data
(Simulated prediction with data)
Parameters
----------
h : int (default : 5)
How many steps to forecast
fit_once : boolean
(default: True) Fits only once before the in-sample prediction; if False, fits after every new datapoint
fit_method : string
Which method to fit the model with
Returns
----------
- Plot of the forecast against data
"""
import matplotlib.pyplot as plt
import seaborn as sns
figsize = kwargs.get('figsize', (10,7))
plt.figure(figsize=figsize)
predictions = self.predict_is(h=h, fit_method=fit_method, fit_once=fit_once)
data = self.data[-h:]
plt.plot(predictions.index,data,label='Data')
plt.plot(predictions.index,predictions,label='Predictions',c='black')
plt.title(self.y_name)
plt.legend(loc=2)
plt.show()
def predict(self, h=5, oos_data=None, intervals=False, **kwargs):
""" Makes forecast with the estimated model
Parameters
----------
h : int (default : 5)
How many steps ahead would you like to forecast?
oos_data : pd.DataFrame
Data for the variables to be used out of sample (ys can be NaNs)
intervals : boolean (default: False)
Whether to return prediction intervals
Returns
----------
- pd.DataFrame with predicted values
"""
if self.latent_variables.estimated is False:
raise Exception("No latent variables estimated!")
else:
# Sort/manipulate the out-of-sample data
_, X_oos = dmatrices(self.formula, oos_data)
X_oos = np.array([X_oos])[0]
X_pred = X_oos[:h]
date_index = self.shift_dates(h)
if self.latent_variables.estimation_method in ['M-H']:
sim_vector = np.zeros([15000,h])
for n in range(0, 15000):
t_z = self.draw_latent_variables(nsims=1).T[0]
_, Y, _, coefficients = self._model(t_z)
coefficients_star = coefficients.T[-1]
theta_pred = np.dot(np.array([coefficients_star]), X_pred.T)[0]
t_z = np.array([self.latent_variables.z_list[k].prior.transform(t_z[k]) for k in range(t_z.shape[0])])
model_scale, model_shape, model_skewness = self._get_scale_and_shape(t_z)
sim_vector[n,:] = self.family.draw_variable(self.link(theta_pred), model_scale, model_shape, model_skewness, theta_pred.shape[0])
sim_vector = sim_vector.T
forecasted_values = np.array([np.mean(i) for i in sim_vector])
prediction_01 = np.array([np.percentile(i, 1) for i in sim_vector])
prediction_05 = np.array([np.percentile(i, 5) for i in sim_vector])
prediction_95 = np.array([np.percentile(i, 95) for i in sim_vector])
prediction_99 = np.array([np.percentile(i, 99) for i in sim_vector])
else:
# Retrieve data, dates and (transformed) latent variables
_, Y, _, coefficients = self._model(self.latent_variables.get_z_values())
coefficients_star = coefficients.T[-1]
theta_pred = np.dot(np.array([coefficients_star]), X_pred.T)[0]
t_z = self.transform_z()
mean_values = np.append(Y, self.link(theta_pred))
model_scale, model_shape, model_skewness = self._get_scale_and_shape(t_z)
if self.model_name2 == "Skewt":
m1 = (np.sqrt(model_shape)*sp.gamma((model_shape-1.0)/2.0))/(np.sqrt(np.pi)*sp.gamma(model_shape/2.0))
forecasted_values = mean_values[-h:] + (model_skewness - (1.0/model_skewness))*model_scale*m1
else:
forecasted_values = mean_values[-h:]
if intervals is False:
result = pd.DataFrame(forecasted_values)
result.rename(columns={0:self.data_name}, inplace=True)
else:
# Get mean prediction and simulations (for errors)
if self.latent_variables.estimation_method not in ['M-H']:
sim_values = np.zeros([15000,h])
if intervals is True:
for n in range(0,15000):
sim_values[n,:] = self.family.draw_variable(self.link(theta_pred),model_scale,model_shape,model_skewness,theta_pred.shape[0])
sim_values = sim_values.T
prediction_01 = np.array([np.percentile(i, 1) for i in sim_values])
prediction_05 = np.array([np.percentile(i, 5) for i in sim_values])
prediction_95 = np.array([np.percentile(i, 95) for i in sim_values])
prediction_99 = np.array([np.percentile(i, 99) for i in sim_values])
result = pd.DataFrame([forecasted_values, prediction_01, prediction_05,
prediction_95, prediction_99]).T
result.rename(columns={0:self.data_name, 1: "1% Prediction Interval",
2: "5% Prediction Interval", 3: "95% Prediction Interval", 4: "99% Prediction Interval"},
inplace=True)
result.index = date_index[-h:]
return result
def predict_is(self, h=5, fit_once=True, fit_method='MLE', intervals=False):
""" Makes dynamic in-sample predictions with the estimated model
Parameters
----------
h : int (default : 5)
How many steps would you like to forecast?
fit_once : boolean
(default: True) Fits only once before the in-sample prediction; if False, fits after every new datapoint
fit_method : string
Which method to fit the model with
intervals : boolean (default: False)
Whether to return prediction intervals
Returns
----------
- pd.DataFrame with predicted values
"""
predictions = []
for t in range(0,h):
data1 = self.data_original.iloc[:-h+t,:]
data2 = self.data_original.iloc[-h+t:,:]
x = GASReg(formula=self.formula, data=self.data_original[:(-h+t)], family=self.family)
if fit_once is False:
x.fit(method=fit_method, printer=False)
if t == 0:
if fit_once is True:
x.fit(method=fit_method, printer=False)
saved_lvs = x.latent_variables
predictions = x.predict(h=1, oos_data=data2, intervals=intervals)
else:
if fit_once is True:
x.latent_variables = saved_lvs
predictions = pd.concat([predictions,x.predict(h=1, oos_data=data2, intervals=intervals)])
predictions.rename(columns={0:self.y_name}, inplace=True)
predictions.index = self.index[-h:]
return predictions
def sample(self, nsims=1000):
""" Samples from the posterior predictive distribution
Parameters
----------
nsims : int (default : 1000)
How many draws from the posterior predictive distribution
Returns
----------
- np.ndarray of draws from the data
"""
if self.latent_variables.estimation_method not in ['BBVI', 'M-H']:
raise Exception("No latent variables estimated!")
else:
lv_draws = self.draw_latent_variables(nsims=nsims)
mus = [self._model(lv_draws[:,i])[0] for i in range(nsims)]
model_scale, model_shape, model_skewness = self._get_scale_and_shape_sim(lv_draws)
data_draws = np.array([self.family.draw_variable(self.link(mus[i]),
np.repeat(model_scale[i], mus[i].shape[0]), np.repeat(model_shape[i], mus[i].shape[0]),
np.repeat(model_skewness[i], mus[i].shape[0]), mus[i].shape[0]) for i in range(nsims)])
return data_draws
def plot_sample(self, nsims=10, plot_data=True, **kwargs):
"""
Plots draws from the posterior predictive density against the data
Parameters
----------
nsims : int (default : 10)
How many draws from the posterior predictive distribution
plot_data : boolean
Whether to plot the data or not
"""
if self.latent_variables.estimation_method not in ['BBVI', 'M-H']:
raise Exception("No latent variables estimated!")
else:
import matplotlib.pyplot as plt
import seaborn as sns
figsize = kwargs.get('figsize',(10,7))
plt.figure(figsize=figsize)
date_index = self.index.copy()
_, Y, _, _ = self._model(self.latent_variables.get_z_values())
draws = self.sample(nsims).T
plt.plot(date_index, draws, label='Posterior Draws', alpha=1.0)
if plot_data is True:
plt.plot(date_index, Y, label='Data', c='black', alpha=0.5, linestyle='', marker='s')
plt.title(self.data_name)
plt.show()
def ppc(self, nsims=1000, T=np.mean):
""" Computes posterior predictive p-value
Parameters
----------
nsims : int (default : 1000)
How many draws for the PPC
T : function
A discrepancy measure - e.g. np.mean, np.std, np.max
Returns
----------
- float (posterior predictive p-value)
"""
if self.latent_variables.estimation_method not in ['BBVI', 'M-H']:
raise Exception("No latent variables estimated!")
else:
lv_draws = self.draw_latent_variables(nsims=nsims)
mus = [self._model(lv_draws[:,i])[0] for i in range(nsims)]
model_scale, model_shape, model_skewness = self._get_scale_and_shape_sim(lv_draws)
data_draws = np.array([self.family.draw_variable(self.link(mus[i]),
np.repeat(model_scale[i], mus[i].shape[0]), np.repeat(model_shape[i], mus[i].shape[0]),
np.repeat(model_skewness[i], mus[i].shape[0]), mus[i].shape[0]) for i in range(nsims)])
T_sims = T(self.sample(nsims=nsims), axis=1)
T_actual = T(self.data)
return len(T_sims[T_sims>T_actual])/nsims
def plot_ppc(self, nsims=1000, T=np.mean, **kwargs):
""" Plots histogram of the discrepancy from draws of the posterior
Parameters
----------
nsims : int (default : 1000)
How many draws for the PPC
T : function
A discrepancy measure - e.g. np.mean, np.std, np.max
"""
if self.latent_variables.estimation_method not in ['BBVI', 'M-H']:
raise Exception("No latent variables estimated!")
else:
import matplotlib.pyplot as plt
import seaborn as sns
figsize = kwargs.get('figsize',(10,7))
lv_draws = self.draw_latent_variables(nsims=nsims)
mus = [self._model(lv_draws[:,i])[0] for i in range(nsims)]
model_scale, model_shape, model_skewness = self._get_scale_and_shape_sim(lv_draws)
data_draws = np.array([self.family.draw_variable(self.link(mus[i]),
np.repeat(model_scale[i], mus[i].shape[0]), np.repeat(model_shape[i], mus[i].shape[0]),
np.repeat(model_skewness[i], mus[i].shape[0]), mus[i].shape[0]) for i in range(nsims)])
T_sim = T(self.sample(nsims=nsims), axis=1)
T_actual = T(self.data)
if T == np.mean:
description = " of the mean"
elif T == np.max:
description = " of the maximum"
elif T == np.min:
description = " of the minimum"
elif T == np.median:
description = " of the median"
else:
description = ""
plt.figure(figsize=figsize)
ax = plt.subplot()
ax.axvline(T_actual)
sns.distplot(T_sim, kde=False, ax=ax)
ax.set(title='Posterior predictive' + description, xlabel='T(x)', ylabel='Frequency');
plt.show() | bsd-3-clause |
dyoung418/tensorflow | tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py | 15 | 31142 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Implementations of different data feeders to provide data for TF trainer."""
# TODO(ipolosukhin): Replace this module with feed-dict queue runners & queues.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import itertools
import math
import numpy as np
import six
from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import tf_logging as logging
# pylint: disable=g-multiple-import,g-bad-import-order
from .pandas_io import HAS_PANDAS, extract_pandas_data, extract_pandas_matrix, extract_pandas_labels
from .dask_io import HAS_DASK, extract_dask_data, extract_dask_labels
# pylint: enable=g-multiple-import,g-bad-import-order
def _get_in_out_shape(x_shape, y_shape, n_classes, batch_size=None):
"""Returns shape for input and output of the data feeder."""
x_is_dict, y_is_dict = isinstance(
x_shape, dict), y_shape is not None and isinstance(y_shape, dict)
if y_is_dict and n_classes is not None:
assert isinstance(n_classes, dict)
if batch_size is None:
batch_size = list(x_shape.values())[0][0] if x_is_dict else x_shape[0]
elif batch_size <= 0:
raise ValueError('Invalid batch_size %d.' % batch_size)
if x_is_dict:
input_shape = {}
for k, v in list(x_shape.items()):
input_shape[k] = [batch_size] + (list(v[1:]) if len(v) > 1 else [1])
else:
x_shape = list(x_shape[1:]) if len(x_shape) > 1 else [1]
input_shape = [batch_size] + x_shape
if y_shape is None:
return input_shape, None, batch_size
def out_el_shape(out_shape, num_classes):
out_shape = list(out_shape[1:]) if len(out_shape) > 1 else []
# Skip first dimension if it is 1.
if out_shape and out_shape[0] == 1:
out_shape = out_shape[1:]
if num_classes is not None and num_classes > 1:
return [batch_size] + out_shape + [num_classes]
else:
return [batch_size] + out_shape
if not y_is_dict:
output_shape = out_el_shape(y_shape, n_classes)
else:
output_shape = dict([
(k, out_el_shape(v, n_classes[k]
if n_classes is not None and k in n_classes else None))
for k, v in list(y_shape.items())
])
return input_shape, output_shape, batch_size
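# Worked example (illustrative only): for x_shape=(100, 3), y_shape=(100,),
# n_classes=2 and batch_size=32, the function above returns
# input_shape=[32, 3], output_shape=[32, 2] (one-hot) and batch_size=32.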
def _data_type_filter(x, y):
"""Filter data types into acceptable format."""
if HAS_DASK:
x = extract_dask_data(x)
if y is not None:
y = extract_dask_labels(y)
if HAS_PANDAS:
x = extract_pandas_data(x)
if y is not None:
y = extract_pandas_labels(y)
return x, y
def _is_iterable(x):
return hasattr(x, 'next') or hasattr(x, '__next__')
def setup_train_data_feeder(x,
y,
n_classes,
batch_size=None,
shuffle=True,
epochs=None):
"""Create data feeder, to sample inputs from dataset.
If `x` and `y` are iterators, use `StreamingDataFeeder`.
Args:
x: numpy, pandas or Dask matrix or dictionary of aforementioned. Also
supports iterables.
y: numpy, pandas or Dask array or dictionary of aforementioned. Also
supports iterables.
n_classes: number of classes. Must be None or same type as y. In case `y`
is `dict` (or an iterable which returns dict), `n_classes[key]` gives the
number of classes for `y[key]`.
batch_size: size to split data into parts. Must be >= 1.
shuffle: Whether to shuffle the inputs.
epochs: Number of epochs to run.
Returns:
DataFeeder object that returns training data.
Raises:
ValueError: if one of `x` and `y` is iterable and the other is not.
"""
x, y = _data_type_filter(x, y)
if HAS_DASK:
# pylint: disable=g-import-not-at-top
import dask.dataframe as dd
if (isinstance(x, (dd.Series, dd.DataFrame)) and
(y is None or isinstance(y, (dd.Series, dd.DataFrame)))):
data_feeder_cls = DaskDataFeeder
else:
data_feeder_cls = DataFeeder
else:
data_feeder_cls = DataFeeder
if _is_iterable(x):
if y is not None and not _is_iterable(y):
raise ValueError('Both x and y should be iterators for '
'streaming learning to work.')
return StreamingDataFeeder(x, y, n_classes, batch_size)
return data_feeder_cls(
x, y, n_classes, batch_size, shuffle=shuffle, epochs=epochs)
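# Illustrative usage sketch (not part of the original module; the array names
# below are assumptions for the example only):
#   import numpy as np
#   x = np.random.rand(100, 3).astype(np.float32)
#   y = np.random.randint(0, 2, size=100)
#   feeder = setup_train_data_feeder(x, y, n_classes=2, batch_size=32)
#   inp_ph, out_ph = feeder.input_builder()   # build input/output placeholders
#   feed_dict = feeder.get_feed_dict_fn()()   # one mini-batch of feeds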
def _batch_data(x, batch_size=None):
if (batch_size is not None) and (batch_size <= 0):
raise ValueError('Invalid batch_size %d.' % batch_size)
x_first_el = six.next(x)
x = itertools.chain([x_first_el], x)
chunk = dict([(k, []) for k in list(x_first_el.keys())]) if isinstance(
x_first_el, dict) else []
chunk_filled = False
for data in x:
if isinstance(data, dict):
for k, v in list(data.items()):
chunk[k].append(v)
if (batch_size is not None) and (len(chunk[k]) >= batch_size):
chunk[k] = np.matrix(chunk[k])
chunk_filled = True
if chunk_filled:
yield chunk
chunk = dict([(k, []) for k in list(x_first_el.keys())]) if isinstance(
x_first_el, dict) else []
chunk_filled = False
else:
chunk.append(data)
if (batch_size is not None) and (len(chunk) >= batch_size):
yield np.matrix(chunk)
chunk = []
if isinstance(x_first_el, dict):
for k, v in list(data.items()):
chunk[k] = np.matrix(chunk[k])
yield chunk
else:
yield np.matrix(chunk)
def setup_predict_data_feeder(x, batch_size=None):
"""Returns an iterable for feeding into predict step.
Args:
x: numpy, pandas, Dask array or dictionary of aforementioned. Also supports
iterable.
batch_size: Size of batches to split data into. If `None`, returns one
batch of full size.
Returns:
List or iterator (or dictionary thereof) of parts of data to predict on.
Raises:
ValueError: if `batch_size` <= 0.
"""
if HAS_DASK:
x = extract_dask_data(x)
if HAS_PANDAS:
x = extract_pandas_data(x)
if _is_iterable(x):
return _batch_data(x, batch_size)
if len(x.shape) == 1:
x = np.reshape(x, (-1, 1))
if batch_size is not None:
if batch_size <= 0:
raise ValueError('Invalid batch_size %d.' % batch_size)
n_batches = int(math.ceil(float(len(x)) / batch_size))
return [x[i * batch_size:(i + 1) * batch_size] for i in xrange(n_batches)]
return [x]
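# Illustrative only: a (10, 2) numpy array with batch_size=4 is split by the
# helper above into ceil(10 / 4) == 3 parts of length 4, 4 and 2.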
def setup_processor_data_feeder(x):
"""Sets up processor iterable.
Args:
x: numpy, pandas or iterable.
Returns:
Iterable of data to process.
"""
if HAS_PANDAS:
x = extract_pandas_matrix(x)
return x
def check_array(array, dtype):
"""Checks array on dtype and converts it if different.
Args:
array: Input array.
dtype: Expected dtype.
Returns:
Original array or converted.
"""
# skip check if array is instance of other classes, e.g. h5py.Dataset
# to avoid copying array and loading whole data into memory
if isinstance(array, (np.ndarray, list)):
array = np.array(array, dtype=dtype, order=None, copy=False)
return array
def _access(data, iloc):
"""Accesses an element from collection, using integer location based indexing.
Args:
data: array-like. The collection to access
iloc: `int` or `list` of `int`s. Location(s) to access in `collection`
Returns:
The element of `data` found at location(s) `iloc`.
"""
if HAS_PANDAS:
import pandas as pd # pylint: disable=g-import-not-at-top
if isinstance(data, pd.Series) or isinstance(data, pd.DataFrame):
return data.iloc[iloc]
return data[iloc]
def _check_dtype(dtype):
if dtypes.as_dtype(dtype) == dtypes.float64:
logging.warn(
'float64 is not supported by many models, consider casting to float32.')
return dtype
class DataFeeder(object):
"""Data feeder is an example class to sample data for TF trainer."""
def __init__(self,
x,
y,
n_classes,
batch_size=None,
shuffle=True,
random_state=None,
epochs=None):
"""Initializes a DataFeeder instance.
Args:
x: One feature sample which can either Nd numpy matrix of shape
`[n_samples, n_features, ...]` or dictionary of Nd numpy matrix.
y: label vector, either floats for regression or class id for
classification. If matrix, will consider as a sequence of labels.
Can be `None` for unsupervised setting. Also supports dictionary of
labels.
n_classes: Number of classes, 0 and 1 are considered regression, `None`
will pass through the input labels without one-hot conversion. Also, if
`y` is `dict`, then `n_classes` must be `dict` such that
`n_classes[key] = n_classes for label y[key]`, `None` otherwise.
batch_size: Mini-batch size to accumulate samples in one mini batch.
shuffle: Whether to shuffle `x`.
random_state: Numpy `RandomState` object to reproduce sampling.
epochs: Number of times to iterate over input data before raising
`StopIteration` exception.
Attributes:
x: Input features (ndarray or dictionary of ndarrays).
y: Input label (ndarray or dictionary of ndarrays).
n_classes: Number of classes (if `None`, pass through indices without
one-hot conversion).
batch_size: Mini-batch size to accumulate.
input_shape: Shape of the input (or dictionary of shapes).
output_shape: Shape of the output (or dictionary of shapes).
input_dtype: DType of input (or dictionary of dtypes).
output_dtype: DType of output (or dictionary of dtypes).
"""
x_is_dict, y_is_dict = isinstance(x, dict), y is not None and isinstance(
y, dict)
if isinstance(y, list):
y = np.array(y)
self._x = dict([(k, check_array(v, v.dtype)) for k, v in list(x.items())
]) if x_is_dict else check_array(x, x.dtype)
self._y = None if y is None else (
dict([(k, check_array(v, v.dtype)) for k, v in list(y.items())])
if y_is_dict else check_array(y, y.dtype))
# self.n_classes is not None means we're converting raw target indices
# to one-hot.
if n_classes is not None:
if not y_is_dict:
y_dtype = (np.int64
if n_classes is not None and n_classes > 1 else np.float32)
self._y = (None if y is None else check_array(y, dtype=y_dtype))
self.n_classes = n_classes
self.max_epochs = epochs
x_shape = dict([(k, v.shape) for k, v in list(self._x.items())
]) if x_is_dict else self._x.shape
y_shape = dict([(k, v.shape) for k, v in list(self._y.items())
]) if y_is_dict else None if y is None else self._y.shape
self.input_shape, self.output_shape, self._batch_size = _get_in_out_shape(
x_shape, y_shape, n_classes, batch_size)
# Input dtype matches dtype of x.
self._input_dtype = (
dict([(k, _check_dtype(v.dtype)) for k, v in list(self._x.items())])
if x_is_dict else _check_dtype(self._x.dtype))
# self._output_dtype == np.float32 when y is None
self._output_dtype = (
dict([(k, _check_dtype(v.dtype)) for k, v in list(self._y.items())])
if y_is_dict else (
_check_dtype(self._y.dtype) if y is not None else np.float32))
# self.n_classes is None means we're passing in raw target indices
if n_classes is not None and y_is_dict:
for key in list(n_classes.keys()):
if key in self._output_dtype:
self._output_dtype[key] = np.float32
self._shuffle = shuffle
self.random_state = np.random.RandomState(
42) if random_state is None else random_state
num_samples = list(self._x.values())[0].shape[
0] if x_is_dict else self._x.shape[0]
if self._shuffle:
self.indices = self.random_state.permutation(num_samples)
else:
self.indices = np.array(range(num_samples))
self.offset = 0
self.epoch = 0
self._epoch_placeholder = None
@property
def x(self):
return self._x
@property
def y(self):
return self._y
@property
def shuffle(self):
return self._shuffle
@property
def input_dtype(self):
return self._input_dtype
@property
def output_dtype(self):
return self._output_dtype
@property
def batch_size(self):
return self._batch_size
def make_epoch_variable(self):
"""Adds a placeholder variable for the epoch to the graph.
Returns:
The epoch placeholder.
"""
self._epoch_placeholder = array_ops.placeholder(
dtypes.int32, [1], name='epoch')
return self._epoch_placeholder
def input_builder(self):
"""Builds inputs in the graph.
Returns:
Two placeholders for inputs and outputs.
"""
def get_placeholder(shape, dtype, name_prepend):
if shape is None:
return None
if isinstance(shape, dict):
placeholder = {}
for key in list(shape.keys()):
placeholder[key] = array_ops.placeholder(
dtypes.as_dtype(dtype[key]), [None] + shape[key][1:],
name=name_prepend + '_' + key)
else:
placeholder = array_ops.placeholder(
dtypes.as_dtype(dtype), [None] + shape[1:], name=name_prepend)
return placeholder
self._input_placeholder = get_placeholder(self.input_shape,
self._input_dtype, 'input')
self._output_placeholder = get_placeholder(self.output_shape,
self._output_dtype, 'output')
return self._input_placeholder, self._output_placeholder
def set_placeholders(self, input_placeholder, output_placeholder):
"""Sets placeholders for this data feeder.
Args:
input_placeholder: Placeholder for `x` variable. Should match shape
of the examples in the x dataset.
output_placeholder: Placeholder for `y` variable. Should match
shape of the examples in the y dataset. Can be `None`.
"""
self._input_placeholder = input_placeholder
self._output_placeholder = output_placeholder
def get_feed_params(self):
"""Function returns a `dict` with data feed params while training.
Returns:
A `dict` with data feed params while training.
"""
return {
'epoch': self.epoch,
'offset': self.offset,
'batch_size': self._batch_size
}
def get_feed_dict_fn(self):
"""Returns a function that samples data into given placeholders.
Returns:
A function that when called samples a random subset of batch size
from `x` and `y`.
"""
x_is_dict, y_is_dict = isinstance(
self._x, dict), self._y is not None and isinstance(self._y, dict)
# Assign input features from random indices.
def extract(data, indices):
return (np.array(_access(data, indices)).reshape((indices.shape[0], 1)) if
len(data.shape) == 1 else _access(data, indices))
# assign labels from random indices
def assign_label(data, shape, dtype, n_classes, indices):
shape[0] = indices.shape[0]
out = np.zeros(shape, dtype=dtype)
for i in xrange(out.shape[0]):
sample = indices[i]
# self.n_classes is None means we're passing in raw target indices
if n_classes is None:
out[i] = _access(data, sample)
else:
if n_classes > 1:
if len(shape) == 2:
out.itemset((i, int(_access(data, sample))), 1.0)
else:
for idx, value in enumerate(_access(data, sample)):
out.itemset(tuple([i, idx, value]), 1.0)
else:
out[i] = _access(data, sample)
return out
def _feed_dict_fn():
"""Function that samples data into given placeholders."""
if self.max_epochs is not None and self.epoch + 1 > self.max_epochs:
raise StopIteration
assert self._input_placeholder is not None
feed_dict = {}
if self._epoch_placeholder is not None:
feed_dict[self._epoch_placeholder.name] = [self.epoch]
# Take next batch of indices.
x_len = list(self._x.values())[0].shape[
0] if x_is_dict else self._x.shape[0]
end = min(x_len, self.offset + self._batch_size)
batch_indices = self.indices[self.offset:end]
# adding input placeholder
feed_dict.update(
dict([(self._input_placeholder[k].name, extract(v, batch_indices))
for k, v in list(self._x.items())]) if x_is_dict else
{self._input_placeholder.name: extract(self._x, batch_indices)})
# move offset and reset it if necessary
self.offset += self._batch_size
if self.offset >= x_len:
self.indices = self.random_state.permutation(
x_len) if self._shuffle else np.array(range(x_len))
self.offset = 0
self.epoch += 1
# return early if there are no labels
if self._output_placeholder is None:
return feed_dict
# adding output placeholders
if y_is_dict:
for k, v in list(self._y.items()):
n_classes = (self.n_classes[k] if k in self.n_classes else
None) if self.n_classes is not None else None
shape, dtype = self.output_shape[k], self._output_dtype[k]
feed_dict.update({
self._output_placeholder[k].name:
assign_label(v, shape, dtype, n_classes, batch_indices)
})
else:
shape, dtype, n_classes = self.output_shape, self._output_dtype, self.n_classes
feed_dict.update({
self._output_placeholder.name:
assign_label(self._y, shape, dtype, n_classes, batch_indices)
})
return feed_dict
return _feed_dict_fn
class StreamingDataFeeder(DataFeeder):
"""Data feeder for TF trainer that reads data from iterator.
Streaming data feeder allows reading data as it comes in from disk or
somewhere else. It is customary to have these iterators rotate infinitely
over the dataset, to allow control of how much to learn on the trainer side.
"""
def __init__(self, x, y, n_classes, batch_size):
"""Initializes a StreamingDataFeeder instance.
Args:
x: iterator each element of which returns one feature sample. Sample can
be a Nd numpy matrix or dictionary of Nd numpy matrices.
y: iterator each element of which returns one label sample. Sample can be
a Nd numpy matrix or dictionary of Nd numpy matrices with 1 or many
classes regression values.
n_classes: indicator of how many classes the corresponding label sample
has for the purposes of one-hot conversion of label. In case where `y`
is a dictionary, `n_classes` must be dictionary (with same keys as `y`)
of how many classes there are in each label in `y`. If key is
present in `y` and missing in `n_classes`, the value is assumed `None`
and no one-hot conversion will be applied to the label with that key.
batch_size: Mini batch size to accumulate samples in one batch. If set
to `None`, the iterator is assumed to return already-batched elements.
Attributes:
x: input features (or dictionary of input features).
y: input label (or dictionary of output features).
n_classes: number of classes.
batch_size: mini batch size to accumulate.
input_shape: shape of the input (can be dictionary depending on `x`).
output_shape: shape of the output (can be dictionary depending on `y`).
input_dtype: dtype of input (can be dictionary depending on `x`).
output_dtype: dtype of output (can be dictionary depending on `y`).
"""
# pylint: disable=invalid-name,super-init-not-called
x_first_el = six.next(x)
self._x = itertools.chain([x_first_el], x)
if y is not None:
y_first_el = six.next(y)
self._y = itertools.chain([y_first_el], y)
else:
y_first_el = None
self._y = None
self.n_classes = n_classes
x_is_dict = isinstance(x_first_el, dict)
y_is_dict = y is not None and isinstance(y_first_el, dict)
if y_is_dict and n_classes is not None:
assert isinstance(n_classes, dict)
# extract shapes for first_elements
if x_is_dict:
x_first_el_shape = dict(
[(k, [1] + list(v.shape)) for k, v in list(x_first_el.items())])
else:
x_first_el_shape = [1] + list(x_first_el.shape)
if y_is_dict:
y_first_el_shape = dict(
[(k, [1] + list(v.shape)) for k, v in list(y_first_el.items())])
elif y is None:
y_first_el_shape = None
else:
y_first_el_shape = ([1] + list(y_first_el[0].shape if isinstance(
y_first_el, list) else y_first_el.shape))
self.input_shape, self.output_shape, self._batch_size = _get_in_out_shape(
x_first_el_shape, y_first_el_shape, n_classes, batch_size)
# Input dtype of x_first_el.
if x_is_dict:
self._input_dtype = dict(
[(k, _check_dtype(v.dtype)) for k, v in list(x_first_el.items())])
else:
self._input_dtype = _check_dtype(x_first_el.dtype)
# Output dtype of y_first_el.
def check_y_dtype(el):
if isinstance(el, np.ndarray):
return el.dtype
elif isinstance(el, list):
return check_y_dtype(el[0])
else:
return _check_dtype(np.dtype(type(el)))
# Output types are floats, due to both softmaxes and regression req.
if n_classes is not None and (y is None or not y_is_dict) and n_classes > 0:
self._output_dtype = np.float32
elif y_is_dict:
self._output_dtype = dict(
[(k, check_y_dtype(v)) for k, v in list(y_first_el.items())])
elif y is None:
self._output_dtype = None
else:
self._output_dtype = check_y_dtype(y_first_el)
def get_feed_params(self):
"""Function returns a `dict` with data feed params while training.
Returns:
A `dict` with data feed params while training.
"""
return {'batch_size': self._batch_size}
def get_feed_dict_fn(self):
"""Returns a function, that will sample data and provide it to placeholders.
Returns:
A function that when called samples a random subset of batch size
from x and y.
"""
self.stopped = False
def _feed_dict_fn():
"""Samples data and provides it to placeholders.
Returns:
`dict` of input and output tensors.
"""
def init_array(shape, dtype):
"""Initialize array of given shape or dict of shapes and dtype."""
if shape is None:
return None
elif isinstance(shape, dict):
return dict([(k, np.zeros(shape[k], dtype[k]))
for k in list(shape.keys())])
else:
return np.zeros(shape, dtype=dtype)
def put_data_array(dest, index, source=None, n_classes=None):
"""Puts data array into container."""
if source is None:
dest = dest[:index]
elif n_classes is not None and n_classes > 1:
if len(self.output_shape) == 2:
dest.itemset((index, source), 1.0)
else:
for idx, value in enumerate(source):
dest.itemset(tuple([index, idx, value]), 1.0)
else:
if len(dest.shape) > 1:
dest[index, :] = source
else:
dest[index] = source[0] if isinstance(source, list) else source
return dest
def put_data_array_or_dict(holder, index, data=None, n_classes=None):
"""Puts data array or data dictionary into container."""
if holder is None:
return None
if isinstance(holder, dict):
if data is None:
data = {k: None for k in holder.keys()}
assert isinstance(data, dict)
for k in holder.keys():
num_classes = n_classes[k] if (n_classes is not None and
k in n_classes) else None
holder[k] = put_data_array(holder[k], index, data[k], num_classes)
else:
holder = put_data_array(holder, index, data, n_classes)
return holder
if self.stopped:
raise StopIteration
inp = init_array(self.input_shape, self._input_dtype)
out = init_array(self.output_shape, self._output_dtype)
for i in xrange(self._batch_size):
# Add handling when queue ends.
try:
next_inp = six.next(self._x)
inp = put_data_array_or_dict(inp, i, next_inp, None)
except StopIteration:
self.stopped = True
if i == 0:
raise
inp = put_data_array_or_dict(inp, i, None, None)
out = put_data_array_or_dict(out, i, None, None)
break
if self._y is not None:
next_out = six.next(self._y)
out = put_data_array_or_dict(out, i, next_out, self.n_classes)
# creating feed_dict
if isinstance(inp, dict):
feed_dict = dict([(self._input_placeholder[k].name, inp[k])
for k in list(self._input_placeholder.keys())])
else:
feed_dict = {self._input_placeholder.name: inp}
if self._y is not None:
if isinstance(out, dict):
feed_dict.update(
dict([(self._output_placeholder[k].name, out[k])
for k in list(self._output_placeholder.keys())]))
else:
feed_dict.update({self._output_placeholder.name: out})
return feed_dict
return _feed_dict_fn
class DaskDataFeeder(object):
"""Data feeder for that reads data from dask.Series and dask.DataFrame.
Numpy arrays can be serialized to disk and it's possible to do random seeks
into them. DaskDataFeeder will remove requirement to have full dataset in the
memory and still do random seeks for sampling of batches.
"""
def __init__(self,
x,
y,
n_classes,
batch_size,
shuffle=True,
random_state=None,
epochs=None):
"""Initializes a DaskDataFeeder instance.
Args:
x: iterator that, for each element, returns features.
y: iterator that, for each element, returns 1 or many classes /
regression values.
n_classes: indicator of how many classes the label has.
batch_size: Mini batch size to accumulate.
shuffle: Whether to shuffle the inputs.
random_state: random state for RNG. Note that it will mutate so use an
int value for this if you want consistent sized batches.
epochs: Number of epochs to run.
Attributes:
x: input features.
y: input label.
n_classes: number of classes.
batch_size: mini batch size to accumulate.
input_shape: shape of the input.
output_shape: shape of the output.
input_dtype: dtype of input.
output_dtype: dtype of output.
Raises:
ValueError: if `x` or `y` are `dict`, as they are not supported currently.
"""
if isinstance(x, dict) or isinstance(y, dict):
raise ValueError(
'DaskDataFeeder does not support dictionaries at the moment.')
# pylint: disable=invalid-name,super-init-not-called
import dask.dataframe as dd # pylint: disable=g-import-not-at-top
# TODO(terrytangyuan): check x and y dtypes in dask_io like pandas
self._x = x
self._y = y
# save column names
self._x_columns = list(x.columns)
if isinstance(y.columns[0], str):
self._y_columns = list(y.columns)
else:
# deal with cases where two DFs have overlapped default numeric colnames
self._y_columns = len(self._x_columns) + 1
self._y = self._y.rename(columns={y.columns[0]: self._y_columns})
# TODO(terrytangyuan): deal with unsupervised cases
# combine into a data frame
self.df = dd.multi.concat([self._x, self._y], axis=1)
self.n_classes = n_classes
x_count = x.count().compute()[0]
x_shape = (x_count, len(self._x.columns))
y_shape = (x_count, len(self._y.columns))
# TODO(terrytangyuan): Add support for shuffle and epochs.
self._shuffle = shuffle
self.epochs = epochs
self.input_shape, self.output_shape, self._batch_size = _get_in_out_shape(
x_shape, y_shape, n_classes, batch_size)
self.sample_fraction = self._batch_size / float(x_count)
self._input_dtype = _check_dtype(self._x.dtypes[0])
self._output_dtype = _check_dtype(self._y.dtypes[self._y_columns])
if random_state is None:
self.random_state = 66
else:
self.random_state = random_state
def get_feed_params(self):
"""Function returns a `dict` with data feed params while training.
Returns:
A `dict` with data feed params while training.
"""
return {'batch_size': self._batch_size}
def get_feed_dict_fn(self, input_placeholder, output_placeholder):
"""Returns a function, that will sample data and provide it to placeholders.
Args:
input_placeholder: tf.Placeholder for input features mini batch.
output_placeholder: tf.Placeholder for output labels.
Returns:
A function that when called samples a random subset of batch size
from x and y.
"""
def _feed_dict_fn():
"""Samples data and provides it to placeholders."""
# TODO(ipolosukhin): option for with/without replacement (dev version of
# dask)
sample = self.df.random_split(
[self.sample_fraction, 1 - self.sample_fraction],
random_state=self.random_state)
inp = extract_pandas_matrix(sample[0][self._x_columns].compute()).tolist()
out = extract_pandas_matrix(sample[0][self._y_columns].compute())
# convert to correct dtype
inp = np.array(inp, dtype=self._input_dtype)
# one-hot encode out for each class for cross entropy loss
if HAS_PANDAS:
import pandas as pd # pylint: disable=g-import-not-at-top
if not isinstance(out, pd.Series):
out = out.flatten()
out_max = self._y.max().compute().values[0]
encoded_out = np.zeros((out.size, out_max + 1), dtype=self._output_dtype)
encoded_out[np.arange(out.size), out] = 1
return {input_placeholder.name: inp, output_placeholder.name: encoded_out}
return _feed_dict_fn
| apache-2.0 |
ky822/scikit-learn | sklearn/externals/joblib/__init__.py | 72 | 4795 | """ Joblib is a set of tools to provide **lightweight pipelining in
Python**. In particular, joblib offers:
1. transparent disk-caching of the output values and lazy re-evaluation
(memoize pattern)
2. easy simple parallel computing
3. logging and tracing of the execution
Joblib is optimized to be **fast** and **robust** in particular on large
data and has specific optimizations for `numpy` arrays. It is
**BSD-licensed**.
============================== ============================================
**User documentation**: http://pythonhosted.org/joblib
**Download packages**: http://pypi.python.org/pypi/joblib#downloads
**Source code**: http://github.com/joblib/joblib
**Report issues**: http://github.com/joblib/joblib/issues
============================== ============================================
Vision
--------
The vision is to provide tools to easily achieve better performance and
reproducibility when working with long running jobs.
* **Avoid computing the same thing twice**: code is rerun over and
over, for instance when prototyping computationally heavy jobs (as in
scientific development), but hand-crafted solutions to alleviate this
issue are error-prone and often lead to unreproducible results
* **Persist to disk transparently**: persisting in an efficient way
arbitrary objects containing large data is hard. Using
joblib's caching mechanism avoids hand-written persistence and
implicitly links the file on disk to the execution context of
the original Python object. As a result, joblib's persistence is
good for resuming an application status or computational job, eg
after a crash.
Joblib strives to address these problems while **leaving your code and
your flow control as unmodified as possible** (no framework, no new
paradigms).
Main features
------------------
1) **Transparent and fast disk-caching of output value:** a memoize or
make-like functionality for Python functions that works well for
arbitrary Python objects, including very large numpy arrays. Separate
persistence and flow-execution logic from domain logic or algorithmic
code by writing the operations as a set of steps with well-defined
inputs and outputs: Python functions. Joblib can save their
computation to disk and rerun it only if necessary::
>>> import numpy as np
>>> from sklearn.externals.joblib import Memory
>>> mem = Memory(cachedir='/tmp/joblib')
>>> import numpy as np
>>> a = np.vander(np.arange(3)).astype(np.float)
>>> square = mem.cache(np.square)
>>> b = square(a) # doctest: +ELLIPSIS
________________________________________________________________________________
[Memory] Calling square...
square(array([[ 0., 0., 1.],
[ 1., 1., 1.],
[ 4., 2., 1.]]))
___________________________________________________________square - 0...s, 0.0min
>>> c = square(a)
>>> # The above call did not trigger an evaluation
2) **Embarrassingly parallel helper:** to make it easy to write readable
parallel code and debug it quickly::
>>> from sklearn.externals.joblib import Parallel, delayed
>>> from math import sqrt
>>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10))
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
3) **Logging/tracing:** The different functionalities will
progressively acquire better logging mechanisms to help track what
has been run, and capture I/O easily. In addition, Joblib will
provide a few I/O primitives, to easily define logging and
display streams, and provide a way of compiling a report.
We want to be able to quickly inspect what has been run.
4) **Fast compressed Persistence**: a replacement for pickle to work
efficiently on Python objects containing large data (
*joblib.dump* & *joblib.load* ).
..
>>> import shutil ; shutil.rmtree('/tmp/joblib/')
"""
# PEP0440 compatible formatted version, see:
# https://www.python.org/dev/peps/pep-0440/
#
# Generic release markers:
# X.Y
# X.Y.Z # For bugfix releases
#
# Admissible pre-release markers:
# X.YaN # Alpha release
# X.YbN # Beta release
# X.YrcN # Release Candidate
# X.Y # Final release
#
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
__version__ = '0.9.0b4'
from .memory import Memory, MemorizedResult
from .logger import PrintTime
from .logger import Logger
from .hashing import hash
from .numpy_pickle import dump
from .numpy_pickle import load
from .parallel import Parallel
from .parallel import delayed
from .parallel import cpu_count
| bsd-3-clause |
pulinagrawal/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/text.py | 69 | 55366 | """
Classes for including text in a figure.
"""
from __future__ import division
import math
import numpy as np
from matplotlib import cbook
from matplotlib import rcParams
import artist
from artist import Artist
from cbook import is_string_like, maxdict
from font_manager import FontProperties
from patches import bbox_artist, YAArrow, FancyBboxPatch, \
FancyArrowPatch, Rectangle
import transforms as mtransforms
from transforms import Affine2D, Bbox
from lines import Line2D
import matplotlib.nxutils as nxutils
def _process_text_args(override, fontdict=None, **kwargs):
"Return an override dict. See :func:`~pyplot.text' docstring for info"
if fontdict is not None:
override.update(fontdict)
override.update(kwargs)
return override
# Extracted from Text's method to serve as a function
def get_rotation(rotation):
"""
Return the text angle as float.
*rotation* may be 'horizontal', 'vertical', or a numeric value in degrees.
"""
if rotation in ('horizontal', None):
angle = 0.
elif rotation == 'vertical':
angle = 90.
else:
angle = float(rotation)
return angle%360
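# Examples (illustrative): get_rotation('horizontal') -> 0.0,
# get_rotation('vertical') -> 90.0, get_rotation(450) -> 90.0 (wrapped mod 360).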
# these are not available for the object inspector until after the
# class is built so we define an initial set here for the init
# function and they will be overridden after object defn
artist.kwdocd['Text'] = """
========================== =========================================================================
Property Value
========================== =========================================================================
alpha float
animated [True | False]
backgroundcolor any matplotlib color
bbox rectangle prop dict plus key 'pad' which is a pad in points
clip_box a matplotlib.transform.Bbox instance
clip_on [True | False]
color any matplotlib color
family [ 'serif' | 'sans-serif' | 'cursive' | 'fantasy' | 'monospace' ]
figure a matplotlib.figure.Figure instance
fontproperties a matplotlib.font_manager.FontProperties instance
horizontalalignment or ha [ 'center' | 'right' | 'left' ]
label any string
linespacing float
lod [True | False]
multialignment ['left' | 'right' | 'center' ]
name or fontname string eg, ['Sans' | 'Courier' | 'Helvetica' ...]
position (x,y)
rotation [ angle in degrees | 'vertical' | 'horizontal' ]
size or fontsize [ size in points | relative size eg 'smaller', 'x-large' ]
style or fontstyle [ 'normal' | 'italic' | 'oblique']
text string
transform a matplotlib.transform transformation instance
variant [ 'normal' | 'small-caps' ]
verticalalignment or va [ 'center' | 'top' | 'bottom' | 'baseline' ]
visible [True | False]
weight or fontweight [ 'normal' | 'bold' | 'heavy' | 'light' | 'ultrabold' | 'ultralight']
x float
y float
zorder any number
========================== =========================================================================
"""
# TODO : This function may move into the Text class as a method. As a
# matter of fact, the information from the _get_textbox function
# should be available during the Text._get_layout() call, which is
# called within the _get_textbox. So, it would be better to move this
# function into a method with some refactoring of the _get_layout method.
def _get_textbox(text, renderer):
"""
Calculate the bounding box of the text. Unlike the
:meth:`matplotlib.text.Text.get_extents` method, the bbox size of
the text before rotation is calculated.
"""
projected_xs = []
projected_ys = []
theta = text.get_rotation()/180.*math.pi
tr = mtransforms.Affine2D().rotate(-theta)
for t, wh, x, y in text._get_layout(renderer)[1]:
w, h = wh
xt1, yt1 = tr.transform_point((x, y))
xt2, yt2 = xt1+w, yt1+h
projected_xs.extend([xt1, xt2])
projected_ys.extend([yt1, yt2])
xt_box, yt_box = min(projected_xs), min(projected_ys)
w_box, h_box = max(projected_xs) - xt_box, max(projected_ys) - yt_box
tr = mtransforms.Affine2D().rotate(theta)
x_box, y_box = tr.transform_point((xt_box, yt_box))
return x_box, y_box, w_box, h_box
class Text(Artist):
"""
Handle storing and drawing of text in window or data coordinates.
"""
zorder = 3
def __str__(self):
return "Text(%g,%g,%s)"%(self._y,self._y,repr(self._text))
def __init__(self,
x=0, y=0, text='',
color=None, # defaults to rc params
verticalalignment='bottom',
horizontalalignment='left',
multialignment=None,
fontproperties=None, # defaults to FontProperties()
rotation=None,
linespacing=None,
**kwargs
):
"""
Create a :class:`~matplotlib.text.Text` instance at *x*, *y*
with string *text*.
Valid kwargs are
%(Text)s
"""
Artist.__init__(self)
self.cached = maxdict(5)
self._x, self._y = x, y
if color is None: color = rcParams['text.color']
if fontproperties is None: fontproperties=FontProperties()
elif is_string_like(fontproperties): fontproperties=FontProperties(fontproperties)
self.set_text(text)
self.set_color(color)
self._verticalalignment = verticalalignment
self._horizontalalignment = horizontalalignment
self._multialignment = multialignment
self._rotation = rotation
self._fontproperties = fontproperties
self._bbox = None
self._bbox_patch = None # a FancyBboxPatch instance
self._renderer = None
if linespacing is None:
linespacing = 1.2 # Maybe use rcParam later.
self._linespacing = linespacing
self.update(kwargs)
#self.set_bbox(dict(pad=0))
def contains(self,mouseevent):
"""Test whether the mouse event occurred in the patch.
In the case of text, a hit is true anywhere in the
axis-aligned bounding-box containing the text.
Returns True or False.
"""
if callable(self._contains): return self._contains(self,mouseevent)
if not self.get_visible() or self._renderer is None:
return False,{}
l,b,w,h = self.get_window_extent().bounds
r = l+w
t = b+h
xyverts = (l,b), (l, t), (r, t), (r, b)
x, y = mouseevent.x, mouseevent.y
inside = nxutils.pnpoly(x, y, xyverts)
return inside,{}
def _get_xy_display(self):
'get the (possibly unit converted) transformed x, y in display coords'
x, y = self.get_position()
return self.get_transform().transform_point((x,y))
def _get_multialignment(self):
if self._multialignment is not None: return self._multialignment
else: return self._horizontalalignment
def get_rotation(self):
'return the text angle as float in degrees'
return get_rotation(self._rotation) # string_or_number -> number
def update_from(self, other):
'Copy properties from other to self'
Artist.update_from(self, other)
self._color = other._color
self._multialignment = other._multialignment
self._verticalalignment = other._verticalalignment
self._horizontalalignment = other._horizontalalignment
self._fontproperties = other._fontproperties.copy()
self._rotation = other._rotation
self._picker = other._picker
self._linespacing = other._linespacing
def _get_layout(self, renderer):
key = self.get_prop_tup()
if key in self.cached: return self.cached[key]
horizLayout = []
thisx, thisy = 0.0, 0.0
xmin, ymin = 0.0, 0.0
width, height = 0.0, 0.0
lines = self._text.split('\n')
whs = np.zeros((len(lines), 2))
horizLayout = np.zeros((len(lines), 4))
# Find full vertical extent of font,
# including ascenders and descenders:
tmp, heightt, bl = renderer.get_text_width_height_descent(
'lp', self._fontproperties, ismath=False)
offsety = heightt * self._linespacing
baseline = None
for i, line in enumerate(lines):
clean_line, ismath = self.is_math_text(line)
w, h, d = renderer.get_text_width_height_descent(
clean_line, self._fontproperties, ismath=ismath)
if baseline is None:
baseline = h - d
whs[i] = w, h
horizLayout[i] = thisx, thisy, w, h
thisy -= offsety
width = max(width, w)
ymin = horizLayout[-1][1]
ymax = horizLayout[0][1] + horizLayout[0][3]
height = ymax-ymin
xmax = xmin + width
# get the rotation matrix
M = Affine2D().rotate_deg(self.get_rotation())
offsetLayout = np.zeros((len(lines), 2))
offsetLayout[:] = horizLayout[:, 0:2]
# now offset the individual text lines within the box
if len(lines)>1: # do the multiline alignment
malign = self._get_multialignment()
if malign == 'center':
offsetLayout[:, 0] += width/2.0 - horizLayout[:, 2] / 2.0
elif malign == 'right':
offsetLayout[:, 0] += width - horizLayout[:, 2]
# the corners of the unrotated bounding box
cornersHoriz = np.array(
[(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin)],
np.float_)
# now rotate the bbox
cornersRotated = M.transform(cornersHoriz)
txs = cornersRotated[:, 0]
tys = cornersRotated[:, 1]
# compute the bounds of the rotated box
xmin, xmax = txs.min(), txs.max()
ymin, ymax = tys.min(), tys.max()
width = xmax - xmin
height = ymax - ymin
# Now move the box to the target position, offsetting the display bbox by alignment
halign = self._horizontalalignment
valign = self._verticalalignment
# compute the text location in display coords and the offsets
# necessary to align the bbox with that location
if halign=='center': offsetx = (xmin + width/2.0)
elif halign=='right': offsetx = (xmin + width)
else: offsetx = xmin
if valign=='center': offsety = (ymin + height/2.0)
elif valign=='top': offsety = (ymin + height)
elif valign=='baseline': offsety = (ymin + height) - baseline
else: offsety = ymin
xmin -= offsetx
ymin -= offsety
bbox = Bbox.from_bounds(xmin, ymin, width, height)
# now rotate the positions around the first x,y position
xys = M.transform(offsetLayout)
xys -= (offsetx, offsety)
xs, ys = xys[:, 0], xys[:, 1]
ret = bbox, zip(lines, whs, xs, ys)
self.cached[key] = ret
return ret
def set_bbox(self, rectprops):
"""
Draw a bounding box around self. rectprops are any settable
properties for a rectangle, eg facecolor='red', alpha=0.5.
t.set_bbox(dict(facecolor='red', alpha=0.5))
If rectprops has "boxstyle" key. A FancyBboxPatch
is initialized with rectprops and will be drawn. The mutation
scale of the FancyBboxPath is set to the fontsize.
ACCEPTS: rectangle prop dict
"""
# The self._bbox_patch object is created only if rectprops has
# boxstyle key. Otherwise, self._bbox will be set to the
# rectprops and the bbox will be drawn using bbox_artist
# function. This is to keep the backward compatibility.
if rectprops is not None and "boxstyle" in rectprops:
props = rectprops.copy()
boxstyle = props.pop("boxstyle")
bbox_transmuter = props.pop("bbox_transmuter", None)
self._bbox_patch = FancyBboxPatch((0., 0.),
1., 1.,
boxstyle=boxstyle,
bbox_transmuter=bbox_transmuter,
transform=mtransforms.IdentityTransform(),
**props)
self._bbox = None
else:
self._bbox_patch = None
self._bbox = rectprops
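# Illustrative example of the "boxstyle" branch described in the docstring above
# (added for clarity; not part of the original module):
#
#     t.set_bbox(dict(boxstyle='round', facecolor='wheat', alpha=0.5))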
def get_bbox_patch(self):
"""
Return the bbox Patch object. Returns None if the
FancyBboxPatch is not made.
"""
return self._bbox_patch
def update_bbox_position_size(self, renderer):
"""
Update the location and the size of the bbox. This method
should be used when the position and size of the bbox needs to
be updated before actually drawing the bbox.
"""
# For arrow_patch, use textbox as patchA by default.
if not isinstance(self.arrow_patch, FancyArrowPatch):
return
if self._bbox_patch:
trans = self.get_transform()
# don't use self.get_position here, which refers to text position
# in Text, and dash position in TextWithDash:
posx = float(self.convert_xunits(self._x))
posy = float(self.convert_yunits(self._y))
posx, posy = trans.transform_point((posx, posy))
x_box, y_box, w_box, h_box = _get_textbox(self, renderer)
self._bbox_patch.set_bounds(0., 0.,
w_box, h_box)
theta = self.get_rotation()/180.*math.pi
tr = mtransforms.Affine2D().rotate(theta)
tr = tr.translate(posx+x_box, posy+y_box)
self._bbox_patch.set_transform(tr)
fontsize_in_pixel = renderer.points_to_pixels(self.get_size())
self._bbox_patch.set_mutation_scale(fontsize_in_pixel)
#self._bbox_patch.draw(renderer)
else:
props = self._bbox
if props is None: props = {}
props = props.copy() # don't want to alter the pad externally
pad = props.pop('pad', 4)
pad = renderer.points_to_pixels(pad)
bbox = self.get_window_extent(renderer)
l,b,w,h = bbox.bounds
l-=pad/2.
b-=pad/2.
w+=pad
h+=pad
r = Rectangle(xy=(l,b),
width=w,
height=h,
)
r.set_transform(mtransforms.IdentityTransform())
r.set_clip_on( False )
r.update(props)
self.arrow_patch.set_patchA(r)
def _draw_bbox(self, renderer, posx, posy):
""" Update the location and the size of the bbox
(FancyBboxPatch), and draw it.
"""
x_box, y_box, w_box, h_box = _get_textbox(self, renderer)
self._bbox_patch.set_bounds(0., 0.,
w_box, h_box)
theta = self.get_rotation()/180.*math.pi
tr = mtransforms.Affine2D().rotate(theta)
tr = tr.translate(posx+x_box, posy+y_box)
self._bbox_patch.set_transform(tr)
fontsize_in_pixel = renderer.points_to_pixels(self.get_size())
self._bbox_patch.set_mutation_scale(fontsize_in_pixel)
self._bbox_patch.draw(renderer)
def draw(self, renderer):
"""
Draws the :class:`Text` object to the given *renderer*.
"""
if renderer is not None:
self._renderer = renderer
if not self.get_visible(): return
if self._text=='': return
bbox, info = self._get_layout(renderer)
trans = self.get_transform()
# don't use self.get_position here, which refers to text position
# in Text, and dash position in TextWithDash:
posx = float(self.convert_xunits(self._x))
posy = float(self.convert_yunits(self._y))
posx, posy = trans.transform_point((posx, posy))
canvasw, canvash = renderer.get_canvas_width_height()
# draw the FancyBboxPatch
if self._bbox_patch:
self._draw_bbox(renderer, posx, posy)
gc = renderer.new_gc()
gc.set_foreground(self._color)
gc.set_alpha(self._alpha)
gc.set_url(self._url)
if self.get_clip_on():
gc.set_clip_rectangle(self.clipbox)
if self._bbox:
bbox_artist(self, renderer, self._bbox)
angle = self.get_rotation()
if rcParams['text.usetex']:
for line, wh, x, y in info:
x = x + posx
y = y + posy
if renderer.flipy():
y = canvash-y
clean_line, ismath = self.is_math_text(line)
renderer.draw_tex(gc, x, y, clean_line,
self._fontproperties, angle)
return
for line, wh, x, y in info:
x = x + posx
y = y + posy
if renderer.flipy():
y = canvash-y
clean_line, ismath = self.is_math_text(line)
renderer.draw_text(gc, x, y, clean_line,
self._fontproperties, angle,
ismath=ismath)
def get_color(self):
"Return the color of the text"
return self._color
def get_fontproperties(self):
"Return the :class:`~font_manager.FontProperties` object"
return self._fontproperties
def get_font_properties(self):
'alias for get_fontproperties'
return self.get_fontproperties()
def get_family(self):
"Return the list of font families used for font lookup"
return self._fontproperties.get_family()
def get_fontfamily(self):
'alias for get_family'
return self.get_family()
def get_name(self):
"Return the font name as string"
return self._fontproperties.get_name()
def get_style(self):
"Return the font style as string"
return self._fontproperties.get_style()
def get_size(self):
"Return the font size as integer"
return self._fontproperties.get_size_in_points()
def get_variant(self):
"Return the font variant as a string"
return self._fontproperties.get_variant()
def get_fontvariant(self):
'alias for get_variant'
return self.get_variant()
def get_weight(self):
"Get the font weight as string or number"
return self._fontproperties.get_weight()
def get_fontname(self):
'alias for get_name'
return self.get_name()
def get_fontstyle(self):
'alias for get_style'
return self.get_style()
def get_fontsize(self):
'alias for get_size'
return self.get_size()
def get_fontweight(self):
'alias for get_weight'
return self.get_weight()
def get_stretch(self):
'Get the font stretch as a string or number'
return self._fontproperties.get_stretch()
def get_fontstretch(self):
'alias for get_stretch'
return self.get_stretch()
def get_ha(self):
'alias for get_horizontalalignment'
return self.get_horizontalalignment()
def get_horizontalalignment(self):
"""
Return the horizontal alignment as string. Will be one of
'left', 'center' or 'right'.
"""
return self._horizontalalignment
def get_position(self):
"Return the position of the text as a tuple (*x*, *y*)"
x = float(self.convert_xunits(self._x))
y = float(self.convert_yunits(self._y))
return x, y
def get_prop_tup(self):
"""
Return a hashable tuple of properties.
Not intended to be human readable, but useful for backends who
want to cache derived information about text (eg layouts) and
need to know if the text has changed.
"""
x, y = self.get_position()
return (x, y, self._text, self._color,
self._verticalalignment, self._horizontalalignment,
hash(self._fontproperties), self._rotation,
self.figure.dpi, id(self._renderer),
)
def get_text(self):
"Get the text as string"
return self._text
def get_va(self):
'alias for :meth:`get_verticalalignment`'
return self.get_verticalalignment()
def get_verticalalignment(self):
"""
Return the vertical alignment as string. Will be one of
'top', 'center', 'bottom' or 'baseline'.
"""
return self._verticalalignment
def get_window_extent(self, renderer=None, dpi=None):
'''
Return a :class:`~matplotlib.transforms.Bbox` object bounding
the text, in display units.
In addition to being used internally, this is useful for
specifying clickable regions in a png file on a web page.
*renderer* defaults to the _renderer attribute of the text
object. This is not assigned until the first execution of
:meth:`draw`, so you must use this kwarg if you want
to call :meth:`get_window_extent` prior to the first
:meth:`draw`. For getting web page regions, it is
simpler to call the method after saving the figure.
*dpi* defaults to self.figure.dpi; the renderer dpi is
irrelevant. For the web application, if figure.dpi is not
the value used when saving the figure, then the value that
was used must be specified as the *dpi* argument.
'''
#return _unit_box
if not self.get_visible(): return Bbox.unit()
if dpi is not None:
dpi_orig = self.figure.dpi
self.figure.dpi = dpi
if self._text == '':
tx, ty = self._get_xy_display()
return Bbox.from_bounds(tx,ty,0,0)
if renderer is not None:
self._renderer = renderer
if self._renderer is None:
raise RuntimeError('Cannot get window extent w/o renderer')
bbox, info = self._get_layout(self._renderer)
x, y = self.get_position()
x, y = self.get_transform().transform_point((x, y))
bbox = bbox.translated(x, y)
if dpi is not None:
self.figure.dpi = dpi_orig
return bbox
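# Illustrative usage of the dpi handling described in the docstring above (added
# for clarity; not part of the original module). `fig` and `ax` are assumed to be
# an existing Figure/Axes pair:
#
#     t = ax.text(0.5, 0.5, 'hello')
#     fig.savefig('out.png', dpi=80)
#     bbox = t.get_window_extent(dpi=80)   # pixel bounds matching the saved file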
def set_backgroundcolor(self, color):
"""
Set the background color of the text by updating the bbox.
.. seealso::
:meth:`set_bbox`
ACCEPTS: any matplotlib color
"""
if self._bbox is None:
self._bbox = dict(facecolor=color, edgecolor=color)
else:
self._bbox.update(dict(facecolor=color))
def set_color(self, color):
"""
Set the foreground color of the text
ACCEPTS: any matplotlib color
"""
# Make sure it is hashable, or get_prop_tup will fail.
try:
hash(color)
except TypeError:
color = tuple(color)
self._color = color
def set_ha(self, align):
'alias for set_horizontalalignment'
self.set_horizontalalignment(align)
def set_horizontalalignment(self, align):
"""
Set the horizontal alignment to one of
ACCEPTS: [ 'center' | 'right' | 'left' ]
"""
legal = ('center', 'right', 'left')
if align not in legal:
raise ValueError('Horizontal alignment must be one of %s' % str(legal))
self._horizontalalignment = align
def set_ma(self, align):
'alias for set_multialignment'
self.set_multialignment(align)
def set_multialignment(self, align):
"""
Set the alignment for multiple lines layout. The layout of the
bounding box of all the lines is determined by the horizontalalignment
and verticalalignment properties, but the alignment of the text lines
within that box is set by this property.
ACCEPTS: ['left' | 'right' | 'center' ]
"""
legal = ('center', 'right', 'left')
if align not in legal:
raise ValueError('Multi-line alignment must be one of %s' % str(legal))
self._multialignment = align
def set_linespacing(self, spacing):
"""
Set the line spacing as a multiple of the font size.
Default is 1.2.
ACCEPTS: float (multiple of font size)
"""
self._linespacing = spacing
def set_family(self, fontname):
"""
Set the font family. May be either a single string, or a list
of strings in decreasing priority. Each string may be either
a real font name or a generic font class name. If the latter,
the specific font names will be looked up in the
:file:`matplotlibrc` file.
ACCEPTS: [ FONTNAME | 'serif' | 'sans-serif' | 'cursive' | 'fantasy' | 'monospace' ]
"""
self._fontproperties.set_family(fontname)
def set_variant(self, variant):
"""
Set the font variant, either 'normal' or 'small-caps'.
ACCEPTS: [ 'normal' | 'small-caps' ]
"""
self._fontproperties.set_variant(variant)
def set_fontvariant(self, variant):
'alias for set_variant'
return self.set_variant(variant)
def set_name(self, fontname):
"""alias for set_family"""
return self.set_family(fontname)
def set_fontname(self, fontname):
"""alias for set_family"""
self.set_family(fontname)
def set_style(self, fontstyle):
"""
Set the font style.
ACCEPTS: [ 'normal' | 'italic' | 'oblique']
"""
self._fontproperties.set_style(fontstyle)
def set_fontstyle(self, fontstyle):
'alias for set_style'
return self.set_style(fontstyle)
def set_size(self, fontsize):
"""
Set the font size. May be either a size string, relative to
the default font size, or an absolute font size in points.
ACCEPTS: [ size in points | 'xx-small' | 'x-small' | 'small' | 'medium' | 'large' | 'x-large' | 'xx-large' ]
"""
self._fontproperties.set_size(fontsize)
def set_fontsize(self, fontsize):
'alias for set_size'
return self.set_size(fontsize)
def set_weight(self, weight):
"""
Set the font weight.
ACCEPTS: [ a numeric value in range 0-1000 | 'ultralight' | 'light' | 'normal' | 'regular' | 'book' | 'medium' | 'roman' | 'semibold' | 'demibold' | 'demi' | 'bold' | 'heavy' | 'extra bold' | 'black' ]
"""
self._fontproperties.set_weight(weight)
def set_fontweight(self, weight):
'alias for set_weight'
return self.set_weight(weight)
def set_stretch(self, stretch):
"""
Set the font stretch (horizontal condensation or expansion).
ACCEPTS: [ a numeric value in range 0-1000 | 'ultra-condensed' | 'extra-condensed' | 'condensed' | 'semi-condensed' | 'normal' | 'semi-expanded' | 'expanded' | 'extra-expanded' | 'ultra-expanded' ]
"""
self._fontproperties.set_stretch(stretch)
def set_fontstretch(self, stretch):
'alias for set_stretch'
return self.set_stretch(stretch)
def set_position(self, xy):
"""
Set the (*x*, *y*) position of the text
ACCEPTS: (x,y)
"""
self.set_x(xy[0])
self.set_y(xy[1])
def set_x(self, x):
"""
Set the *x* position of the text
ACCEPTS: float
"""
self._x = x
def set_y(self, y):
"""
Set the *y* position of the text
ACCEPTS: float
"""
self._y = y
def set_rotation(self, s):
"""
Set the rotation of the text
ACCEPTS: [ angle in degrees | 'vertical' | 'horizontal' ]
"""
self._rotation = s
def set_va(self, align):
'alias for set_verticalalignment'
self.set_verticalalignment(align)
def set_verticalalignment(self, align):
"""
Set the vertical alignment
ACCEPTS: [ 'center' | 'top' | 'bottom' | 'baseline' ]
"""
legal = ('top', 'bottom', 'center', 'baseline')
if align not in legal:
raise ValueError('Vertical alignment must be one of %s' % str(legal))
self._verticalalignment = align
def set_text(self, s):
"""
Set the text string *s*
It may contain newlines (``\\n``) or math in LaTeX syntax.
ACCEPTS: string or anything printable with '%s' conversion.
"""
self._text = '%s' % (s,)
def is_math_text(self, s):
"""
Returns True if the given string *s* contains any mathtext.
"""
# Did we find an even number of non-escaped dollar signs?
# If so, treat it as math text.
dollar_count = s.count(r'$') - s.count(r'\$')
even_dollars = (dollar_count > 0 and dollar_count % 2 == 0)
if rcParams['text.usetex']:
return s, 'TeX'
if even_dollars:
return s, True
else:
return s.replace(r'\$', '$'), False
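# Illustrative examples of the dollar-sign heuristic above (added for clarity;
# not part of the original module). Assumes rcParams['text.usetex'] is False:
#
#     >>> Text().is_math_text(r'$\alpha > \beta$')   # two unescaped dollar signs
#     ('$\\alpha > \\beta$', True)
#     >>> Text().is_math_text(r'cost: \$5')          # only an escaped dollar sign
#     ('cost: $5', False)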
def set_fontproperties(self, fp):
"""
Set the font properties that control the text. *fp* must be a
:class:`matplotlib.font_manager.FontProperties` object.
ACCEPTS: a :class:`matplotlib.font_manager.FontProperties` instance
"""
if is_string_like(fp):
fp = FontProperties(fp)
self._fontproperties = fp.copy()
def set_font_properties(self, fp):
'alias for set_fontproperties'
self.set_fontproperties(fp)
artist.kwdocd['Text'] = artist.kwdoc(Text)
Text.__init__.im_func.__doc__ = cbook.dedent(Text.__init__.__doc__) % artist.kwdocd
class TextWithDash(Text):
"""
This is basically a :class:`~matplotlib.text.Text` with a dash
(drawn with a :class:`~matplotlib.lines.Line2D`) before/after
it. It is intended to be a drop-in replacement for
:class:`~matplotlib.text.Text`, and should behave identically to
it when *dashlength* = 0.0.
The dash always comes between the point specified by
:meth:`~matplotlib.text.Text.set_position` and the text. When a
dash exists, the text alignment arguments (*horizontalalignment*,
*verticalalignment*) are ignored.
*dashlength* is the length of the dash in canvas units.
(default = 0.0).
*dashdirection* is one of 0 or 1, where 0 draws the dash after the
text and 1 before. (default = 0).
*dashrotation* specifies the rotation of the dash, and should
generally stay *None*. In this case
:meth:`~matplotlib.text.TextWithDash.get_dashrotation` returns
:meth:`~matplotlib.text.Text.get_rotation`. (I.e., the dash takes
its rotation from the text's rotation). Because the text center is
projected onto the dash, major deviations in the rotation cause
what may be considered visually unappealing results.
(default = *None*)
*dashpad* is a padding length to add (or subtract) space
between the text and the dash, in canvas units.
(default = 3)
*dashpush* "pushes" the dash and text away from the point
specified by :meth:`~matplotlib.text.Text.set_position` by the
amount in canvas units. (default = 0)
.. note::
The alignment of the two objects is based on the bounding box
of the :class:`~matplotlib.text.Text`, as obtained by
:meth:`~matplotlib.artist.Artist.get_window_extent`. This, in
turn, appears to depend on the font metrics as given by the
rendering backend. Hence the quality of the "centering" of the
label text with respect to the dash varies depending on the
backend used.
.. note::
I'm not sure that I got the
:meth:`~matplotlib.text.TextWithDash.get_window_extent` right,
or whether that's sufficient for providing the object bounding
box.
"""
__name__ = 'textwithdash'
def __str__(self):
return "TextWithDash(%g,%g,%s)"%(self._x,self._y,repr(self._text))
def __init__(self,
x=0, y=0, text='',
color=None, # defaults to rc params
verticalalignment='center',
horizontalalignment='center',
multialignment=None,
fontproperties=None, # defaults to FontProperties()
rotation=None,
linespacing=None,
dashlength=0.0,
dashdirection=0,
dashrotation=None,
dashpad=3,
dashpush=0,
):
Text.__init__(self, x=x, y=y, text=text, color=color,
verticalalignment=verticalalignment,
horizontalalignment=horizontalalignment,
multialignment=multialignment,
fontproperties=fontproperties,
rotation=rotation,
linespacing=linespacing)
# The position (x,y) values for text and dashline
# are bogus as given in the instantiation; they will
# be set correctly by update_coords() in draw()
self.dashline = Line2D(xdata=(x, x),
ydata=(y, y),
color='k',
linestyle='-')
self._dashx = float(x)
self._dashy = float(y)
self._dashlength = dashlength
self._dashdirection = dashdirection
self._dashrotation = dashrotation
self._dashpad = dashpad
self._dashpush = dashpush
#self.set_bbox(dict(pad=0))
def get_position(self):
"Return the position of the text as a tuple (*x*, *y*)"
x = float(self.convert_xunits(self._dashx))
y = float(self.convert_yunits(self._dashy))
return x, y
def get_prop_tup(self):
"""
Return a hashable tuple of properties.
Not intended to be human readable, but useful for backends who
want to cache derived information about text (eg layouts) and
need to know if the text has changed.
"""
props = [p for p in Text.get_prop_tup(self)]
props.extend([self._x, self._y, self._dashlength, self._dashdirection, self._dashrotation, self._dashpad, self._dashpush])
return tuple(props)
def draw(self, renderer):
"""
Draw the :class:`TextWithDash` object to the given *renderer*.
"""
self.update_coords(renderer)
Text.draw(self, renderer)
if self.get_dashlength() > 0.0:
self.dashline.draw(renderer)
def update_coords(self, renderer):
"""
Computes the actual *x*, *y* coordinates for text based on the
input *x*, *y* and the *dashlength*. Since the rotation is
with respect to the actual canvas's coordinates we need to map
back and forth.
"""
dashx, dashy = self.get_position()
dashlength = self.get_dashlength()
# Shortcircuit this process if we don't have a dash
if dashlength == 0.0:
self._x, self._y = dashx, dashy
return
dashrotation = self.get_dashrotation()
dashdirection = self.get_dashdirection()
dashpad = self.get_dashpad()
dashpush = self.get_dashpush()
angle = get_rotation(dashrotation)
theta = np.pi*(angle/180.0+dashdirection-1)
cos_theta, sin_theta = np.cos(theta), np.sin(theta)
transform = self.get_transform()
# Compute the dash end points
# The 'c' prefix is for canvas coordinates
cxy = transform.transform_point((dashx, dashy))
cd = np.array([cos_theta, sin_theta])
c1 = cxy+dashpush*cd
c2 = cxy+(dashpush+dashlength)*cd
inverse = transform.inverted()
(x1, y1) = inverse.transform_point(tuple(c1))
(x2, y2) = inverse.transform_point(tuple(c2))
self.dashline.set_data((x1, x2), (y1, y2))
# We now need to extend this vector out to
# the center of the text area.
# The basic problem here is that we're "rotating"
# two separate objects but want it to appear as
# if they're rotated together.
# This is made non-trivial because of the
# interaction between text rotation and alignment -
# text alignment is based on the bbox after rotation.
# We reset/force both alignments to 'center'
# so we can do something relatively reasonable.
# There's probably a better way to do this by
# embedding all this in the object's transformations,
# but I don't grok the transformation stuff
# well enough yet.
we = Text.get_window_extent(self, renderer=renderer)
w, h = we.width, we.height
# Watch for zeros
if sin_theta == 0.0:
dx = w
dy = 0.0
elif cos_theta == 0.0:
dx = 0.0
dy = h
else:
tan_theta = sin_theta/cos_theta
dx = w
dy = w*tan_theta
if dy > h or dy < -h:
dy = h
dx = h/tan_theta
cwd = np.array([dx, dy])/2
cwd *= 1+dashpad/np.sqrt(np.dot(cwd,cwd))
cw = c2+(dashdirection*2-1)*cwd
newx, newy = inverse.transform_point(tuple(cw))
self._x, self._y = newx, newy
# Now set the window extent
# I'm not at all sure this is the right way to do this.
we = Text.get_window_extent(self, renderer=renderer)
self._twd_window_extent = we.frozen()
self._twd_window_extent.update_from_data_xy(np.array([c1]), False)
# Finally, make text align center
Text.set_horizontalalignment(self, 'center')
Text.set_verticalalignment(self, 'center')
def get_window_extent(self, renderer=None):
'''
Return a :class:`~matplotlib.transforms.Bbox` object bounding
the text, in display units.
In addition to being used internally, this is useful for
specifying clickable regions in a png file on a web page.
*renderer* defaults to the _renderer attribute of the text
object. This is not assigned until the first execution of
:meth:`draw`, so you must use this kwarg if you want
to call :meth:`get_window_extent` prior to the first
:meth:`draw`. For getting web page regions, it is
simpler to call the method after saving the figure.
'''
self.update_coords(renderer)
if self.get_dashlength() == 0.0:
return Text.get_window_extent(self, renderer=renderer)
else:
return self._twd_window_extent
def get_dashlength(self):
"""
Get the length of the dash.
"""
return self._dashlength
def set_dashlength(self, dl):
"""
Set the length of the dash.
ACCEPTS: float (canvas units)
"""
self._dashlength = dl
def get_dashdirection(self):
"""
Get the direction of the dash. 1 is before the text and 0 is after.
"""
return self._dashdirection
def set_dashdirection(self, dd):
"""
Set the direction of the dash following the text.
1 is before the text and 0 is after. The default
is 0, which is what you'd want for the typical
case of ticks below and on the left of the figure.
ACCEPTS: int (1 is before, 0 is after)
"""
self._dashdirection = dd
def get_dashrotation(self):
"""
Get the rotation of the dash in degrees.
"""
if self._dashrotation is None:
return self.get_rotation()
else:
return self._dashrotation
def set_dashrotation(self, dr):
"""
Set the rotation of the dash, in degrees
ACCEPTS: float (degrees)
"""
self._dashrotation = dr
def get_dashpad(self):
"""
Get the extra spacing between the dash and the text, in canvas units.
"""
return self._dashpad
def set_dashpad(self, dp):
"""
Set the "pad" of the TextWithDash, which is the extra spacing
between the dash and the text, in canvas units.
ACCEPTS: float (canvas units)
"""
self._dashpad = dp
def get_dashpush(self):
"""
Get the extra spacing between the dash and the specified text
position, in canvas units.
"""
return self._dashpush
def set_dashpush(self, dp):
"""
Set the "push" of the TextWithDash, which
is the extra spacing between the beginning
of the dash and the specified position.
ACCEPTS: float (canvas units)
"""
self._dashpush = dp
def set_position(self, xy):
"""
Set the (*x*, *y*) position of the :class:`TextWithDash`.
ACCEPTS: (x, y)
"""
self.set_x(xy[0])
self.set_y(xy[1])
def set_x(self, x):
"""
Set the *x* position of the :class:`TextWithDash`.
ACCEPTS: float
"""
self._dashx = float(x)
def set_y(self, y):
"""
Set the *y* position of the :class:`TextWithDash`.
ACCEPTS: float
"""
self._dashy = float(y)
def set_transform(self, t):
"""
Set the :class:`matplotlib.transforms.Transform` instance used
by this artist.
ACCEPTS: a :class:`matplotlib.transforms.Transform` instance
"""
Text.set_transform(self, t)
self.dashline.set_transform(t)
def get_figure(self):
'return the figure instance the artist belongs to'
return self.figure
def set_figure(self, fig):
"""
Set the figure instance the artist belong to.
ACCEPTS: a :class:`matplotlib.figure.Figure` instance
"""
Text.set_figure(self, fig)
self.dashline.set_figure(fig)
artist.kwdocd['TextWithDash'] = artist.kwdoc(TextWithDash)
class Annotation(Text):
"""
A :class:`~matplotlib.text.Text` class to make annotating things
in the figure, such as :class:`~matplotlib.figure.Figure`,
:class:`~matplotlib.axes.Axes`,
:class:`~matplotlib.patches.Rectangle`, etc., easier.
"""
def __str__(self):
return "Annotation(%g,%g,%s)"%(self.xy[0],self.xy[1],repr(self._text))
def __init__(self, s, xy,
xytext=None,
xycoords='data',
textcoords=None,
arrowprops=None,
**kwargs):
"""
Annotate the *x*, *y* point *xy* with text *s* at *x*, *y*
location *xytext*. (If *xytext* = *None*, defaults to *xy*,
and if *textcoords* = *None*, defaults to *xycoords*).
*arrowprops*, if not *None*, is a dictionary of line properties
(see :class:`matplotlib.lines.Line2D`) for the arrow that connects
annotation to the point.
If the dictionary has a key *arrowstyle*, a FancyArrowPatch
instance is created with the given dictionary and is
drawn. Otherwise, a YAArrow patch instance is created and
drawn. Valid keys for YAArrow are
========= =============================================================
Key Description
========= =============================================================
width the width of the arrow in points
frac the fraction of the arrow length occupied by the head
headwidth the width of the base of the arrow head in points
shrink oftentimes it is convenient to have the arrowtip
and base a bit away from the text and point being
annotated. If *d* is the distance between the text and
annotated point, shrink will shorten the arrow so the tip
and base are shrink percent of the distance *d* away from the
endpoints. ie, ``shrink=0.05 is 5%%``
? any key for :class:`matplotlib.patches.polygon`
========= =============================================================
Valid keys for FancyArrowPatch are
=============== ======================================================
Key Description
=============== ======================================================
arrowstyle the arrow style
connectionstyle the connection style
relpos default is (0.5, 0.5)
patchA default is bounding box of the text
patchB default is None
shrinkA default is 2 points
shrinkB default is 2 points
mutation_scale default is text size (in points)
mutation_aspect default is 1.
? any key for :class:`matplotlib.patches.PathPatch`
=============== ======================================================
*xycoords* and *textcoords* are strings that indicate the
coordinates of *xy* and *xytext*.
================= ===================================================
Property Description
================= ===================================================
'figure points' points from the lower left corner of the figure
'figure pixels' pixels from the lower left corner of the figure
'figure fraction' 0,0 is lower left of figure and 1,1 is upper right
'axes points' points from lower left corner of axes
'axes pixels' pixels from lower left corner of axes
'axes fraction' 0,0 is lower left of axes and 1,1 is upper right
'data' use the coordinate system of the object being
annotated (default)
'offset points' Specify an offset (in points) from the *xy* value
'polar' you can specify *theta*, *r* for the annotation,
even in cartesian plots. Note that if you
are using a polar axes, you do not need
to specify polar for the coordinate
system since that is the native "data" coordinate
system.
================= ===================================================
If a 'points' or 'pixels' option is specified, values will be
added to the bottom-left and if negative, values will be
subtracted from the top-right. Eg::
# 10 points to the right of the left border of the axes and
# 5 points below the top border
xy=(10,-5), xycoords='axes points'
Additional kwargs are Text properties:
%(Text)s
"""
if xytext is None:
xytext = xy
if textcoords is None:
textcoords = xycoords
# we'll draw ourself after the artist we annotate by default
x,y = self.xytext = xytext
Text.__init__(self, x, y, s, **kwargs)
self.xy = xy
self.xycoords = xycoords
self.textcoords = textcoords
self.arrowprops = arrowprops
self.arrow = None
if arrowprops and arrowprops.has_key("arrowstyle"):
self._arrow_relpos = arrowprops.pop("relpos", (0.5, 0.5))
self.arrow_patch = FancyArrowPatch((0, 0), (1,1),
**arrowprops)
else:
self.arrow_patch = None
__init__.__doc__ = cbook.dedent(__init__.__doc__) % artist.kwdocd
def contains(self,event):
t,tinfo = Text.contains(self,event)
if self.arrow is not None:
a,ainfo=self.arrow.contains(event)
t = t or a
# self.arrow_patch is currently not checked as this can be a line - JJ
return t,tinfo
def set_figure(self, fig):
if self.arrow is not None:
self.arrow.set_figure(fig)
if self.arrow_patch is not None:
self.arrow_patch.set_figure(fig)
Artist.set_figure(self, fig)
def _get_xy(self, x, y, s):
if s=='data':
trans = self.axes.transData
x = float(self.convert_xunits(x))
y = float(self.convert_yunits(y))
return trans.transform_point((x, y))
elif s=='offset points':
# convert the data point
dx, dy = self.xy
# prevent recursion
if self.xycoords == 'offset points':
return self._get_xy(dx, dy, 'data')
dx, dy = self._get_xy(dx, dy, self.xycoords)
# convert the offset
dpi = self.figure.get_dpi()
x *= dpi/72.
y *= dpi/72.
# add the offset to the data point
x += dx
y += dy
return x, y
elif s=='polar':
theta, r = x, y
x = r*np.cos(theta)
y = r*np.sin(theta)
trans = self.axes.transData
return trans.transform_point((x,y))
elif s=='figure points':
#points from the lower left corner of the figure
dpi = self.figure.dpi
l,b,w,h = self.figure.bbox.bounds
r = l+w
t = b+h
x *= dpi/72.
y *= dpi/72.
if x<0:
x = r + x
if y<0:
y = t + y
return x,y
elif s=='figure pixels':
#pixels from the lower left corner of the figure
l,b,w,h = self.figure.bbox.bounds
r = l+w
t = b+h
if x<0:
x = r + x
if y<0:
y = t + y
return x, y
elif s=='figure fraction':
#(0,0) is lower left, (1,1) is upper right of figure
trans = self.figure.transFigure
return trans.transform_point((x,y))
elif s=='axes points':
#points from the lower left corner of the axes
dpi = self.figure.dpi
l,b,w,h = self.axes.bbox.bounds
r = l+w
t = b+h
if x<0:
x = r + x*dpi/72.
else:
x = l + x*dpi/72.
if y<0:
y = t + y*dpi/72.
else:
y = b + y*dpi/72.
return x, y
elif s=='axes pixels':
#pixels from the lower left corner of the axes
l,b,w,h = self.axes.bbox.bounds
r = l+w
t = b+h
if x<0:
x = r + x
else:
x = l + x
if y<0:
y = t + y
else:
y = b + y
return x, y
elif s=='axes fraction':
#(0,0) is lower left, (1,1) is upper right of axes
trans = self.axes.transAxes
return trans.transform_point((x, y))
def update_positions(self, renderer):
x, y = self.xytext
self._x, self._y = self._get_xy(x, y, self.textcoords)
x, y = self.xy
x, y = self._get_xy(x, y, self.xycoords)
ox0, oy0 = self._x, self._y
ox1, oy1 = x, y
if self.arrowprops:
x0, y0 = x, y
l,b,w,h = self.get_window_extent(renderer).bounds
r = l+w
t = b+h
xc = 0.5*(l+r)
yc = 0.5*(b+t)
d = self.arrowprops.copy()
# Use FancyArrowPatch if self.arrowprops has "arrowstyle" key.
# Otherwise, fallback to YAArrow.
#if d.has_key("arrowstyle"):
if self.arrow_patch:
# adjust the starting point of the arrow relative to
# the textbox.
# TODO : Rotation needs to be accounted.
relpos = self._arrow_relpos
bbox = self.get_window_extent(renderer)
ox0 = bbox.x0 + bbox.width * relpos[0]
oy0 = bbox.y0 + bbox.height * relpos[1]
# The arrow will be drawn from (ox0, oy0) to (ox1,
# oy1). It will be first clipped by patchA and patchB.
# Then it will be shrunk by shrinkA and shrinkB
# (in points). If patch A is not set, self.bbox_patch
# is used.
self.arrow_patch.set_positions((ox0, oy0), (ox1,oy1))
mutation_scale = d.pop("mutation_scale", self.get_size())
mutation_scale = renderer.points_to_pixels(mutation_scale)
self.arrow_patch.set_mutation_scale(mutation_scale)
if self._bbox_patch:
patchA = d.pop("patchA", self._bbox_patch)
self.arrow_patch.set_patchA(patchA)
else:
patchA = d.pop("patchA", self._bbox)
self.arrow_patch.set_patchA(patchA)
else:
# pick the x,y corner of the text bbox closest to point
# annotated
dsu = [(abs(val-x0), val) for val in l, r, xc]
dsu.sort()
_, x = dsu[0]
dsu = [(abs(val-y0), val) for val in b, t, yc]
dsu.sort()
_, y = dsu[0]
shrink = d.pop('shrink', 0.0)
theta = math.atan2(y-y0, x-x0)
r = math.sqrt((y-y0)**2. + (x-x0)**2.)
dx = shrink*r*math.cos(theta)
dy = shrink*r*math.sin(theta)
width = d.pop('width', 4)
headwidth = d.pop('headwidth', 12)
frac = d.pop('frac', 0.1)
self.arrow = YAArrow(self.figure, (x0+dx,y0+dy), (x-dx, y-dy),
width=width, headwidth=headwidth, frac=frac,
**d)
self.arrow.set_clip_box(self.get_clip_box())
def draw(self, renderer):
"""
Draw the :class:`Annotation` object to the given *renderer*.
"""
self.update_positions(renderer)
self.update_bbox_position_size(renderer)
if self.arrow is not None:
if self.arrow.figure is None and self.figure is not None:
self.arrow.figure = self.figure
self.arrow.draw(renderer)
if self.arrow_patch is not None:
if self.arrow_patch.figure is None and self.figure is not None:
self.arrow_patch.figure = self.figure
self.arrow_patch.draw(renderer)
Text.draw(self, renderer)
artist.kwdocd['Annotation'] = Annotation.__init__.__doc__
| agpl-3.0 |
sujithvm/internationality-journals | src/SNIPvsourSNIPv2.py | 3 | 2366 | __author__ = 'Sukrit'
import pandas as pd
import csv
f = open('../output/both_journal_list.txt', 'r') # reading list of journals present in Aminer and Elsevier
x = f.readlines()
f.close()
bothjs = []
for line in x:
bothjs.append(line.rstrip()) # list of common journals, removing '\n'
# OUR SNIP
our_SNIP = pd.read_csv('../output/SNIP_all_journals.csv',usecols=[1,3]) # reading SNIP values calculated by US
our_SNIP.sort_values('Jname',inplace = True) #sorting alphabetically
our_SNIP = our_SNIP[ our_SNIP['SNIP'] != 0 ]
our_SNIP2 = pd.DataFrame()
our_SNIP2 = our_SNIP2.append(our_SNIP,ignore_index = True) #resetting index value
our_SNIP = our_SNIP2
oSNIPnames = our_SNIP['Jname']
#print our_SNIP
# SNIP
SNIP = pd.read_csv("../data/journal_SNIP_values.csv")
journals = SNIP['Source Title'] #taking only 'Source Title' column
SNIPlist = []
'''
for jname in oSNIPnames :
for name in journals :
if jname == name :
SNIPlist.append(jname)
'''
SNIP_2 = pd.DataFrame()
#now we corroborate SNIP values.
for name in oSNIPnames :
SNIP_2 = SNIP_2.append(SNIP[ SNIP['Source Title'] == name ],ignore_index = True) # copy all SNIP/IPP values for which we have citations in Aminer
#print SNIP_2
SNIP_2010 = SNIP_2[['Source Title','2010 SNIP']].copy() # copying 'Source Title' and '2010 SNIP' columns to new df
SNIP_2010 = SNIP_2010.fillna(0) # replacing NaN values with 0
#print SNIP_2010
#print our_SNIP
xarray = []
xarray2 = []
yarray = []
yarray2 = []
names = []
for name in oSNIPnames :
a = our_SNIP['SNIP'][ our_SNIP['Jname']== name ].values
b = SNIP_2010['2010 SNIP'][ SNIP_2010['Source Title'] == name ].values
#if ( a != 0 and a <10 and b != 0 and b < 10 and a > 0.01 ) :
if ( a != 0 and b!= 0 ) :
xarray.append(our_SNIP['SNIP'][our_SNIP['Jname']== name ].values)
yarray.append(SNIP_2010['2010 SNIP'][SNIP_2010['Source Title'] == name ].values)
names.append(name)
yarray = [float(i) for i in yarray]
for item in yarray :
yarray2.append(item)
xarray = [float(i) for i in xarray]
for item in xarray :
xarray2.append(item)
print len(xarray)
print len(yarray)
print len(names)
print "\n\n"
data = [names,xarray,yarray]
with open('../data/SNIP_ourSNIP_ALL.csv','wb') as f:
out = csv.writer(f, delimiter=',',quoting=csv.QUOTE_ALL)
out.writerows(zip(*data)) | mit |
willu47/pyrate | pyrate/utils.py | 1 | 9677 | import datetime
from geographiclib.geodesic import Geodesic
from geopy.distance import distance
def valid_mmsi(mmsi):
"""Checks if a given MMSI number is valid.
Arguments
---------
mmsi : int
An MMSI number
Returns
-------
Returns True if the MMSI number is 9 digits long.
"""
return mmsi is not None and len(str(int(mmsi))) == 9
VALID_MESSAGE_IDS = range(1, 28)
def valid_message_id(message_id):
return message_id in VALID_MESSAGE_IDS
VALID_NAVIGATIONAL_STATUSES = set([0, 1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 14, 15])
def valid_navigational_status(status):
return status in VALID_NAVIGATIONAL_STATUSES
def valid_longitude(lon):
"""Check valid longitude.
Arguments
---------
lon : float
A longitude
Returns
-------
True if the longitude is valid
"""
return lon is not None and -180 <= lon <= 180
def valid_latitude(lat):
"""Check valid latitude.
Arguments
---------
lon : integer
A latitude
Returns
-------
True if the latitude is valid
"""
return lat is not None and -90 <= lat <= 90
def valid_imo(imo=0):
"""Check valid IMO using checksum.
Arguments
---------
imo : integer
An IMO ship identifier
Returns
-------
True if the IMO number is valid
Notes
-----
Taken from Eoin O'Keeffe's `checksum_valid` function in pyAIS
"""
try:
str_imo = str(int(imo))
if len(str_imo) != 7:
return False
sum_val = 0
for ii, chk in enumerate(range(7, 1, -1)):
sum_val += chk * int(str_imo[ii])
if str_imo[6] == str(sum_val)[len(str(sum_val))-1]:
return True
except:
return False
return False
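# Worked example of the checksum above (added for clarity; not part of the
# original module): for IMO 9074729 the weighted sum of the first six digits is
# 9*7 + 0*6 + 7*5 + 4*4 + 7*3 + 2*2 = 139, whose last digit (9) matches the
# seventh digit, so valid_imo(9074729) returns True.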
def is_valid_sog(sog):
"""Validates speed over ground
Arguments
---------
sog : float
Speed over ground
Returns
-------
True if speed over ground is between 0 and 102.2 knots (inclusive)
"""
return sog >= 0 and sog <= 102.2
def is_valid_cog(cog):
"""Validates course over ground
Arguments
---------
cog : float
Course over ground
Returns
-------
True if course over ground is at least 0 and less than 360 degrees
"""
return cog >= 0 and cog < 360
def is_valid_heading(heading):
"""Validates heading
Arguments
---------
heading : float
The heading of the ship in degrees
Returns
-------
True if heading is at least 0 and less than 360 degrees, or 511 (not available)
"""
try:
return (heading >= 0 and heading < 360) or heading == 511
except:
#getting errors on none type for heading
return False
def speed_calc(msg_stream, index1, index2):
"""Computes the speed between two messages in the message stream
Parameters
----------
msg_stream :
A list of dictionaries representing AIS messages for a single MMSI
number. Dictionary keys correspond to the column names from the
`ais_clean` table. The list of messages should be ordered by
timestamp in ascending order.
index1 : int
The index of the first message
index2 : int
The index of the second message
Returns
-------
timediff : datetime
The difference in time between the two messages in datetime
dist : float
The distance between messages in metres
speed : float
The speed in knots
"""
timediff = abs(msg_stream[index2]['Time'] - msg_stream[index1]['Time'])
try:
dist = distance((msg_stream[index1]['Latitude'], msg_stream[index1]['Longitude']),\
(msg_stream[index2]['Latitude'], msg_stream[index2]['Longitude'])).m
except ValueError:
dist = Geodesic.WGS84.Inverse(msg_stream[index1]['Latitude'], msg_stream[index1]['Longitude'],\
msg_stream[index2]['Latitude'], msg_stream[index2]['Longitude'])['s12'] #in metres
if timediff > datetime.timedelta(0):
convert_metres_to_nautical_miles = 0.0005399568
speed = (dist * convert_metres_to_nautical_miles) / (timediff.total_seconds() / 3600.0)
else:
speed = 102.2
return timediff, dist, speed
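# Illustrative usage (added for clarity; not part of the original module):
#
#     stream = [
#         {'Time': datetime.datetime(2015, 1, 1, 0, 0), 'Latitude': 50.0, 'Longitude': -1.0},
#         {'Time': datetime.datetime(2015, 1, 1, 1, 0), 'Latitude': 50.2, 'Longitude': -1.0},
#     ]
#     timediff, dist, speed = speed_calc(stream, 0, 1)
#     # dist is returned in metres and speed in knots (roughly 12 kn here)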
def detect_location_outliers(msg_stream, as_df=False):
"""Detects outlier messages by submitting messages to a speed test
The algorithm proceeds as follows:
1) Create a linked list of all messages with non-null locations
(pointing to next message)
2) Loop through linked list and check for location outliers:
* A location outlier is who does not pass the speed test (<= 50kn;
link is 'discarded' when not reached in time)
* No speed test is performed when
- distance too small (< 0.054nm ~ 100m; catches most positioning
inaccuracies)
=> no outlier
- time gap too big (>= 215h ~ 9d; time it takes to get anywhere on
the globe at 50kn not respecting land)
=> next message is new 'start'
If an alledged outlier is found its link is set to be the current
message's link
3) The start of a linked list becomes special attention: if speed check
fails, the subsequent link is tested
Line of thinking is: Can I get to the next message in time? If not 'next'
must be an outlier, go to next but one.
Parameters
----------
msg_stream :
A list of dictionaries representing AIS messages for a single MMSI
number. Dictionary keys correspond to the column names from the
`ais_clean` table. The list of messages should be ordered by
timestamp in ascending order.
as_df : bool, optional
Set to True if `msg_stream` are passed as a pandas DataFrame
Returns
-------
outlier_rows :
The rows in the message stream which are outliers
"""
if as_df:
raise NotImplementedError('msg_stream cannot be DataFrame, as_df=True does not work yet.')
# 1) linked list
linked_rows = [None]*len(msg_stream)
link = None
has_location_count = 0
for row_index in reversed(range(len(msg_stream))):
if msg_stream[row_index]['Longitude'] is not None and msg_stream[row_index]['Latitude'] is not None:
linked_rows[row_index] = link
link = row_index
has_location_count = has_location_count + 1
# 2)
outlier_rows = [False] * len(msg_stream)
if has_location_count < 2: # no messages that could be outliers available
return outlier_rows
index = next((i for i, j in enumerate(linked_rows) if j), None)
at_start = True
while linked_rows[index] is not None:
# time difference, distance and speed beetween rows of index and corresponding link
timediff, dist, speed = speed_calc(msg_stream, index, linked_rows[index])
if timediff > datetime.timedelta(hours=215):
index = linked_rows[index]
at_start = True #restart
elif dist < 100:
index = linked_rows[index] #skip over gap (at_start remains same)
elif speed > 50:
if at_start is False:
#for now just skip single outliers, i.e. test current index with next but one
outlier_rows[linked_rows[index]] = True
linked_rows[index] = linked_rows[linked_rows[index]]
elif at_start is True and linked_rows[linked_rows[index]] is None: #no subsequent message
outlier_rows[index] = True
outlier_rows[linked_rows[index]] = True
index = linked_rows[index]
else: # explore first three messages A, B, C (at_start is True)
indexA = index
indexB = linked_rows[index]
indexC = linked_rows[indexB]
timediffAC, distAC, speedAC = speed_calc(msg_stream, indexA, indexC)
timediffBC, distBC, speedBC = speed_calc(msg_stream, indexB, indexC)
#if speedtest A->C ok or distance small, then B is outlier, next index is C
if timediffAC <= datetime.timedelta(hours=215) and (distAC < 100 or speedAC <= 50):
outlier_rows[indexB] = True
index = indexC
at_start = False
#elif speedtest B->C ok or distance, then A is outlier, next index is C
elif timediffBC <= datetime.timedelta(hours=215) and (distBC < 100 or speedBC <= 50):
outlier_rows[indexA] = True
index = indexC
at_start = False
# else both fail, so A and B are outliers; set C as the new start
else:
outlier_rows[indexA] = True
outlier_rows[indexB] = True
index = indexC
at_start = True
else: #all good
index = linked_rows[index]
at_start = False
return outlier_rows
def interpolate_passages(msg_stream):
"""Interpolate far apart points in an ordered stream of messages.
Parameters
----------
msg_stream :
A list of dictionaries representing AIS messages for a single MMSI
number. Dictionary keys correspond to the column names from the
`ais_clean` table. The list of messages should be ordered by
timestamp in ascending order.
Returns
-------
artificial_messages : list
A list of artificial messages to fill in gaps/navigate around land.
"""
artificial_messages = []
return artificial_messages
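# The stub above currently returns no artificial messages. The function below is
# an illustrative sketch only (added for clarity; it is not part of the original
# module and is not called anywhere): it linearly interpolates position across
# large time gaps, using the same 'Time'/'Latitude'/'Longitude' keys as the rest
# of this module. The gap threshold and step size are arbitrary assumptions, and
# a real implementation would also need to route around land.
def _example_interpolate_gaps(msg_stream, max_gap=datetime.timedelta(hours=6),
                              step=datetime.timedelta(hours=1)):
    artificial = []
    for first, second in zip(msg_stream[:-1], msg_stream[1:]):
        gap = second['Time'] - first['Time']
        if gap <= max_gap:
            continue
        n_steps = int(gap.total_seconds() // step.total_seconds())
        for k in range(1, n_steps):
            frac = float(k) / n_steps
            artificial.append({
                'Time': first['Time'] + datetime.timedelta(seconds=frac * gap.total_seconds()),
                'Latitude': first['Latitude'] + frac * (second['Latitude'] - first['Latitude']),
                'Longitude': first['Longitude'] + frac * (second['Longitude'] - first['Longitude']),
            })
    return artificial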
| mit |
plazowicz/salib | salib/models/svm/preprocess_gop.py | 1 | 1829 | """
Dumps gop debate data into pickle, structure:
{
id: "sentences":[[first sentence],
[second sentence]],
"label": label
}
"""
import argparse
import pickle
import pandas as pd
from __builtin__ import unicode
from nltk.tokenize.casual import TweetTokenizer
def main():
data_path = '../../data/GOP_REL_ONLY.csv'
out_path = '../../data/gop_data.pkl'
data = {}
df = pd.read_csv(data_path)
tokenizer = TweetTokenizer(strip_handles=True, reduce_len=True)
for idx, row_data in enumerate(df.iterrows()):
row = row_data[1]
data[idx] = {
"candidate": row[0],
"candidate:confidence": str(row[1]),
"sentiment": row[4],
"sentiment:confidence": float(row[5]),
"subject_matter": row[6],
"text": tokenizer.tokenize(unicode(row[14], errors='ignore')),
"tweet_location": row[18]
}
with open(out_path, 'w') as f_out:
pickle.dump(data, f_out)
def make_id_mapping(word_to_id):
# invert a {word: id} dictionary into {id: word}; avoid shadowing the builtin `dict`
result = {}
for key, value in word_to_id.iteritems():
result[value] = key
return result
def preprocess_imdb(imdb, imdb_data_dict):
with open(imdb, 'r') as f_in:
imdb_data = pickle.load(f_in)
with open(imdb_data_dict, 'r') as f_in:
imdb_dict = pickle.load(f_in)
data = {}
mapping = make_id_mapping(imdb_dict)
i = 0
for ids, labels in zip(imdb_data[0], imdb_data[1]):
text = []
for id in ids:
text.append(mapping[id])
data[i] = {
'sentiment': labels,
'text': text
}
i += 1
with open('../../data/imdb_data.pkl', 'w') as f_out:
pickle.dump(data, f_out)
if __name__ == '__main__':
preprocess_imdb('../lstm/imdb.pkl', '../lstm/imdb.dict.pkl')
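# Illustrative usage of the pickle written by main() above (added for clarity;
# not part of the original script):
#
#     with open('../../data/gop_data.pkl') as f:
#         gop = pickle.load(f)
#     print gop[0]['sentiment'], ' '.join(gop[0]['text'])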
| mit |
NeuralEnsemble/NeuroTools | examples/matlab_vs_python/smallnet_inlineC.py | 3 | 5107 | # Created by Eugene M. Izhikevich, 2003 Modified by S. Fusi 2007
# Ported to Python by Eilif Muller, 2008.
#
# Notes:
#
# Requires matplotlib,ipython,numpy>=1.0.3
# On a debian/ubuntu based system:
# $ apt-get install python-matplotlib python-numpy ipython
#
# Start ipython with threaded plotting support:
# $ ipython -pylab
#
# At the resulting prompt, run the file by:
# In [1]: execfile('smallnet_inlineC.py')
# Modules required
import numpy
import numpy.random as random
# Bug fix for numpy version 1.0.4 for numpy.histogram
numpy.lib.function_base.any = numpy.any
# For inline C optimization
from scipy import weave
# For measuring performance
import time
t1 = time.time()
# Excitatory and inhibitory neuron counts
Ne = 1000
Ni = 4
N = Ne+Ni
# Synaptic couplings
Je = 250.0/Ne
Ji = 0.0
# Synaptic couplings (mV)
#S = numpy.zeros((N,N))
#S[:,:Ne] = Je*random.uniform(size=(N,Ne))
#S[:,Ne:] = -Ji*random.uniform(size=(N,Ni))
# Connectivity
#S[:,:Ne][random.uniform(size=(N,Ne))-0.9<=0.0]=0.0
#S[:,Ne:][random.uniform(size=(N,Ni))-0.9<=0.0]=0.0
# 10% Connectivity
targets = []
weights = []
# excitatory
for i in xrange(Ne):
targets.append(random.permutation(numpy.arange(N))[:random.poisson(N*0.1)])
weights.append(Je*numpy.ones_like(targets[i]))
# inhibitory
for i in xrange(Ne,N):
targets.append(random.permutation(numpy.arange(N))[:random.poisson(N*0.1)])
weights.append(-Ji*numpy.ones_like(targets[i]))
# Statistics of the background external current
#mb = 3.0; sb = 4.0
#mue = mb; sigmae=sb
#sigmai = 0.0
# State variable v, initial value of 0
v = numpy.zeros(N)
# Refractory period state variable
r = numpy.zeros(N)
# storage for intermediate calculations
I = numpy.zeros(N)
Isyn = numpy.zeros(N)
# Spike timings in a list
spikes = [[] for x in xrange(N)]
#print 'mu(nu=5Hz)=%f' % (mb+Ne*Je*.015-leak,)
#print 'mu(nu=100Hz)=%f' % (mb+Ne*Je*.1-leak,)
# total duration of the simulation (ms)
dt = 0.05
duration = 400.0
t = numpy.arange(0.0,duration,dt)
vt = numpy.zeros_like(t)
# This is inline C code
c_code = """
const double mb = 3.0;
const double sb = 4.0;
double mue = mb;
double sigmae = sb;
double sigmai = 0.0;
//double dt = 0.05; // (ms)
double leak = 5.0; // (mV/ms)
double sdt = sqrt(dt);
double reset = 0.0; //(mV)
double refr = 2.5; //(ms)
double threshold = 20.0; //(mv)
double Je = 250.0/Ne;
double Ji = 0.0;
int i,j,k;
// GSL random number generation setup
const gsl_rng_type * T_gsl;
gsl_rng * r_gsl;
gsl_rng_env_setup();
T_gsl = gsl_rng_default;
r_gsl = gsl_rng_alloc (T_gsl);
py::list l;
for(i=0;i<Nt[0];i++) {
// time for a strong external input
if (t(i)>150.0) {
mue = 6.5;
sigmae = 7.5;
}
// time to restore the initial statistics of the external input
if (t(i)>300.0) {
mue = mb;
sigmae = sb;
}
// Noise plus synaptic input from last step
for (j=0;j<Ne;j++) {
I(j)=sdt*sigmae*gsl_ran_gaussian(r_gsl,1.0)+Isyn(j);
//I(j) = 0.0;
Isyn(j)=0.0;
}
for (j=Ne;j<N;j++) {
I(j)=sdt*sigmai*gsl_ran_gaussian(r_gsl,1.0)+Isyn(j);
//I(j)=0.0;
Isyn(j)=0.0;
}
// Euler's method for each neuron
for (j=0;j<N;j++) {
if (v(j)>=threshold) {
l = py::list((PyObject*)(spikes[j]));
l.append(t(i));
for (k=0;k<targets[j].size();k++) {
Isyn(targets[j][k]) += (const double)weights[j][k];
}
v(j) = reset;
r(j) = refr;
}
if(r(j)<0) {
I(j) -= dt*(leak-mue);
v(j) +=I(j);
v(j) = v(j)>=0.0 ? v(j) : 0.0;
}
else {
r(j)-=dt;
}
}
vt(i) = v(0);
}
// Clean-up the GSL random number generator
gsl_rng_free (r_gsl);
// debug leftover commented out (it would append a spurious spike at t=3.0 ms to neuron 0):
// l = py::list((PyObject*)spikes[0]);
// l.append(3.0);
"""
t2 = time.time()
print 'Elapsed time is ', str(t2-t1), ' seconds.'
t1 = time.time()
weave.inline(c_code, ['v','r','t','vt','dt',
'spikes','I','Isyn','Ne','Ni','N','targets','weights'],
type_converters=weave.converters.blitz,
headers = ["<gsl/gsl_rng.h>", "<gsl/gsl_randist.h>"],
libraries = ["gsl","gslcblas"])
t2 = time.time()
print 'Elapsed time is ', str(t2-t1), ' seconds.'
def myplot():
global spikes
t1 = time.time()
# build an (n_spikes, 2) array of (neuron index, spike time) pairs from the
# per-neuron spike lists filled in by the inline C code above
firings = array([(j, ts) for j in xrange(N) for ts in spikes[j]])
figure()
# Membrane potential trace of the zeroeth neuron
subplot(3,1,1)
vt[vt>=20.0]=65.0
plot(t,vt)
ylabel(r'$V-V_{rest}\ \left[\rm{mV}\right]$')
# Raster plot of the spikes of the network
subplot(3,1,2)
myfirings = array(firings)
myfirings_100 = myfirings[myfirings[:,0]<min(100,Ne)]
plot(myfirings_100[:,1],myfirings_100[:,0],'.')
axis([0, duration, 0, min(100,Ne)])
ylabel('Neuron index')
# Mean firing rate of the excitatory population as a function of time
subplot(3,1,3)
# 1 ms resultion of rate histogram
dx = 1.0
x = arange(0,duration,dx)
myfirings_Ne = myfirings[myfirings[:,0]<Ne]
mean_fe,x = numpy.histogram(myfirings_Ne[:,1],x)
plot(x,mean_fe/dx/Ne*1000.0,ls='steps')
ylabel('Hz')
xlabel('time [ms]')
t2 = time.time()
print 'Finished. Elapsed', str(t2-t1), ' seconds.'
#myplot()
| gpl-2.0 |
bmazin/SDR | Setup/PSFit_ml_diffAttens.py | 1 | 40224 | '''
Author Rupert Dodkins
A script to automate the identification of resonator attenuations normally performed by PSFit.py. This is accomplished
using Google's TensorFlow machine learning package, which applies a pattern recognition algorithm to the IQ velocity
spectrum. The code implements a 2D image classification algorithm similar to the MNIST test, creating a 2D
image from a 1D variable by populating a matrix of zeros with ones at the y location of each datapoint.
Usage: python PSFit_ml.py 20160712/ps_r115_FL1_1_20160712-225809.h5 20160720/ps_r118_FL1_b_pos_20160720-231653.h5
Inputs:
20160712/ps_r115_FL1_1.txt: list of resonator frequencies and correct attenuations
20160712/ps_r115_FL1_1_20160712-225809.h5: corresponding powersweep file
20160720/ps_r118_FL1_b_pos_20160720-231653.h5: powersweep file the user wishes to infer attenuations for
Intermediaries:
SDR/Setup/ps_peaks_train_w<x>_s<y>.pkl: images and corresponding labels used to train the algorithm
Outputs:
20160712/ps_r115_FL1_1.pkl: frequencies, IQ velocities, Is, Qs, attenuations formatted for quick use
20160720/ps_r118_FL1_b_pos_20160720-231653-ml.txt: to be used with PSFit.py (temporary)
20160720/ps_r118_FL1_b_pos.txt: final list of frequencies and attenuations
How it works:
For each resonator and attenuation the script first assesses whether the IQ loop appears saturated. If it is
unsaturated, the IQ velocity spectrum for that attenuation is compared with the pattern recognition machine. From the
list of attenuations for each resonator where the loop is not saturated and the IQ velocity peak has the correct shape,
the attenuation value is chosen which has the highest 2nd-largest IQ velocity. This identifier was chosen because the
optimum attenuation value has a high max IQ velocity and a low ratio of max IQ velocity to 2nd max IQ velocity, which
is equivalent to choosing the highest 2nd max IQ velocity.
This list of attenuation values and frequencies is either fed to PSFit.py to be checked manually or dumped to
ps_r118_FL1_b_pos.txt
The machine learning algorithm requires a series of images to train and test the algorithm with. If they exist the image
data will be loaded from a train pkl file
Alternatively, if the user does not have a train pkl file but does have a powersweep file and corresponding list of
resonator attenuations this should be used as the initial file and training data will be made. The 3 classes are an
overpowered peak (saturated), peak with the correct amount of power, or an underpowered peak.
These new image data will be saved as pkl files (or appended to existing pkl files) and reloaded
The machine is then trained and its ability to predict the type of image is validated
The weights used to make predictions for each class can be displayed using the plotWeights function
to do:
change training txt file freq comparison function so it's able to match all frequencies
'''
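# A minimal sketch of the selection rule described above (not part of the
# pipeline; the array shapes and the `nonsaturated` mask are assumptions used
# for illustration only): among the unsaturated attenuations, pick the one
# whose second-largest IQ velocity is highest.
import numpy as np

def _example_choose_atten(iq_vels, nonsaturated):
    """iq_vels: (n_attens, n_freqs) IQ velocities for one resonator;
    nonsaturated: boolean array of length n_attens from the loop check."""
    second_max = np.sort(iq_vels, axis=1)[:, -2]               # 2nd-largest per attenuation
    second_max = np.where(nonsaturated, second_max, -np.inf)   # drop saturated attenuations
    return int(np.argmax(second_max))                          # index of chosen attenuation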
import os,sys,inspect
from PSFit import *
from lib.iqsweep import *
import numpy as np
import sys, os
import matplotlib.pyplot as plt
import tensorflow as tf
import pickle
import random
import time
import math
from scipy import interpolate
#removes visible depreciation warnings from lib.iqsweep
import warnings
warnings.filterwarnings("ignore")
class PSFitting():
'''Class has been lifted from PSFit.py and modified to incorporate the machine learning algorithms from
WideAna_ml.py
'''
def __init__(self, initialFile=None):
self.initialFile = initialFile
self.resnum = 0
def loadres(self):
'''
Outputs
Freqs: the span of frequencies for a given resonator
iq_vels: the IQ velocities for all attenuations for a given resonator
Is: the I component of the frequencies for a given resonator
Qs: the Q component of the frequencies for a given resonator
attens: the span of attenuations. The same for all resonators
'''
self.Res1=IQsweep()
self.Res1.LoadPowers(self.initialFile, 'r0', self.freq[self.resnum])
self.resfreq = self.freq[self.resnum]
self.NAttens = len(self.Res1.atten1s)
self.res1_iq_vels=numpy.zeros((self.NAttens,self.Res1.fsteps-1))
self.res1_iq_amps=numpy.zeros((self.NAttens,self.Res1.fsteps))
for iAtt in range(self.NAttens):
for i in range(1,self.Res1.fsteps-1):
self.res1_iq_vels[iAtt,i]=sqrt((self.Res1.Qs[iAtt][i]-self.Res1.Qs[iAtt][i-1])**2+(self.Res1.Is[iAtt][i]-self.Res1.Is[iAtt][i-1])**2)
self.res1_iq_amps[iAtt,:]=sqrt((self.Res1.Qs[iAtt])**2+(self.Res1.Is[iAtt])**2)
#Sort the IQ velocities for each attenuation, to pick out the maximums
sorted_vels = numpy.sort(self.res1_iq_vels,axis=1)
#Last column is maximum values for each atten (row)
self.res1_max_vels = sorted_vels[:,-1]
#Second to last column has second highest value
self.res1_max2_vels = sorted_vels[:,-2]
#Also get indices for maximum of each atten, and second highest
sort_indices = numpy.argsort(self.res1_iq_vels,axis=1)
max_indices = sort_indices[:,-1]
max2_indices = sort_indices[:,-2]
max_neighbor = max_indices.copy()
#for each attenuation find the ratio of the maximum velocity to the second highest velocity
self.res1_max_ratio = self.res1_max_vels.copy()
max_neighbors = zeros(self.NAttens)
max2_neighbors = zeros(self.NAttens)
self.res1_max2_ratio = self.res1_max2_vels.copy()
for iAtt in range(self.NAttens):
if max_indices[iAtt] == 0:
max_neighbor = self.res1_iq_vels[iAtt,max_indices[iAtt]+1]
elif max_indices[iAtt] == len(self.res1_iq_vels[iAtt,:])-1:
max_neighbor = self.res1_iq_vels[iAtt,max_indices[iAtt]-1]
else:
max_neighbor = maximum(self.res1_iq_vels[iAtt,max_indices[iAtt]-1],
self.res1_iq_vels[iAtt,max_indices[iAtt]+1])
max_neighbors[iAtt]=max_neighbor
self.res1_max_ratio[iAtt] = self.res1_max_vels[iAtt]/max_neighbor
if max2_indices[iAtt] == 0:
max2_neighbor = self.res1_iq_vels[iAtt,max2_indices[iAtt]+1]
elif max2_indices[iAtt] == len(self.res1_iq_vels[iAtt,:])-1:
max2_neighbor = self.res1_iq_vels[iAtt,max2_indices[iAtt]-1]
else:
max2_neighbor = maximum(self.res1_iq_vels[iAtt,max2_indices[iAtt]-1],
self.res1_iq_vels[iAtt,max2_indices[iAtt]+1])
max2_neighbors[iAtt]=max2_neighbor
self.res1_max2_ratio[iAtt] = self.res1_max2_vels[iAtt]/max2_neighbor
#normalize the new arrays
self.res1_max_vels /= numpy.max(self.res1_max_vels)
self.res1_max_vels *= numpy.max(self.res1_max_ratio)
self.res1_max2_vels /= numpy.max(self.res1_max2_vels)
max_ratio_threshold = 2.5#1.5
rule_of_thumb_offset = 1#2
# require ROTO adjacent elements to be all below the MRT
bool_remove = np.ones(len(self.res1_max_ratio))
for ri in range(len(self.res1_max_ratio)-rule_of_thumb_offset-2):
bool_remove[ri] = bool((self.res1_max_ratio[ri:ri+rule_of_thumb_offset+1]< max_ratio_threshold).all())
guess_atten_idx = np.extract(bool_remove,np.arange(len(self.res1_max_ratio)))
# require the attenuation value to be past the initial peak in MRT
guess_atten_idx = guess_atten_idx[where(guess_atten_idx > argmax(self.res1_max_ratio) )[0]]
if size(guess_atten_idx) >= 1:
if guess_atten_idx[0]+rule_of_thumb_offset < len(self.Res1.atten1s):
guess_atten_idx[0] += rule_of_thumb_offset
guess_atten_idx = int(guess_atten_idx[0])
else:
guess_atten_idx = self.NAttens/2
return {'freq': self.Res1.freq,
'iq_vels': self.res1_iq_vels,
'Is': self.Res1.Is,
'Qs': self.Res1.Qs,
'attens':self.Res1.atten1s}
def loadps(self):
hd5file=openFile(self.initialFile,mode='r')
group = hd5file.getNode('/','r0')
self.freq=empty(0,dtype='float32')
for sweep in group._f_walkNodes('Leaf'):
k=sweep.read()
self.scale = k['scale'][0]
#print "Scale factor is ", self.scale
self.freq=append(self.freq,[k['f0'][0]])
hd5file.close()
class mlClassification():
def __init__(self, initialFile=None):
'''
Implements the machine learning pattern recognition algorithm on IQ velocity data as well as other tests to
choose the optimum attenuation for each resonator
'''
self.nClass = 3
self.xWidth = 40#np.shape(res1_freqs[1])
self.scalexWidth = 0.5
self.oAttDist = -1 # rule of thumb attenuation steps to reach the overpowered peak
#self.uAttDist = +2 # rule of thumb attenuation steps to reach the underpowed peak
self.initialFile = initialFile
self.baseFile = ('.').join(initialFile.split('.')[:-1])
self.PSFile = self.baseFile[:-16] + '.txt'#os.environ['MKID_DATA_DIR']+'20160712/ps_FL1_1.txt' # power sweep fit, .txt
self.trainFile = 'ps_peaks_train_w%i_s%.2f.pkl' % (self.xWidth, self.scalexWidth)
self.trainFrac = 0.8
self.testFrac=1 - self.trainFrac
def makeWindowImage(self, res_num, iAtten, xCenter, showFrames=False, test_if_noisy=False):
'''Using a given x coordinate a frame is created at that location of size xWidth x xWidth, and then flattened
into a 1d array. Called multiple times in each function.
inputs
xCenter: center location of frame in wavelength space
res_num: index of resonator in question
iAtten: index of attenuation in question
self.scalexWidth: typical values: 1/2, 1/4, 1/8
uses interpolation to put data from an xWidth x xWidth grid to a
(xWidth/scalexWidth) x (xWidth/scalexWidth) grid. This allows the
user to probe the spectrum using a smaller window while utilizing
the higher resolution training data
showFrames: pops up a window of the frame plotted using matplotlib.plot
test_if_noisy: test spectrum by comparing heights of the outer quadrants to the center. A similar test to what
the pattern recognition classification does
'''
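# Illustration of the scalexWidth interpolation described above (numbers assume
# the defaults xWidth=40, scalexWidth=0.5): the 40-sample window is interpolated
# onto an 80-point grid via scipy.interpolate.interp1d, so the flattened image
# returned below has (40/0.5)**2 = 6400 elements.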
xWidth= self.xWidth
start = int(xCenter - xWidth/2)
end = int(xCenter + xWidth/2)
scalexWidth = self.scalexWidth
# for spectra where the peak is close enough to the edge that some points fall across the boundary, pad zeros
if start < 0:
start_diff = abs(start)
start = 0
iq_vels = self.iq_vels[res_num, iAtten, start:end]
iq_vels = np.lib.pad(iq_vels, (start_diff,0), 'constant', constant_values=(0))
elif end >= np.shape(self.freqs)[1]:
iq_vels = self.iq_vels[res_num, iAtten, start:end]
iq_vels = np.lib.pad(iq_vels, (0,end-np.shape(self.freqs)[1]+1), 'constant', constant_values=(0))
else:
iq_vels = self.iq_vels[res_num, iAtten, start:end]
iq_vels = np.round(iq_vels * xWidth / max(self.iq_vels[res_num, iAtten, :]) )
if showFrames:
fig = plt.figure(frameon=False)
fig.add_subplot(111)
plt.plot( iq_vels)
plt.ylim((0,xWidth))
plt.show()
plt.close()
# interpolate iq_vels onto a finer grid
if scalexWidth!=None:
x = np.arange(0, xWidth+1)
iq_vels = np.append(iq_vels, iq_vels[-1])
f = interpolate.interp1d(x, iq_vels)
xnew = np.arange(0, xWidth, scalexWidth)
iq_vels = f(xnew)/ scalexWidth
xWidth = int(xWidth/scalexWidth)
if test_if_noisy:
peak_iqv = mean(iq_vels[int(xWidth/4): int(3*xWidth/4)])
nonpeak_indicies=np.delete(np.arange(xWidth),np.arange(int(xWidth/4),int(3*xWidth/4)))
nonpeak_iqv = iq_vels[nonpeak_indicies]
nonpeak_iqv = mean(nonpeak_iqv[np.where(nonpeak_iqv!=0)]) # since it spans a larger area
noise_condition = 0.7
if (peak_iqv/nonpeak_iqv < noise_condition):
return None
# populates 2d image with ones at location of iq_vel
image = np.zeros((xWidth, xWidth))
for i in range(xWidth-1):
if iq_vels[i]>=xWidth: iq_vels[i] = xWidth-1
if iq_vels[i] < iq_vels[i+1]:
image[int(iq_vels[i]):int(iq_vels[i+1]),i]=1
else:
image[int(iq_vels[i]):int(iq_vels[i-1]),i]=1
if iq_vels[i] == iq_vels[i+1]:
image[int(iq_vels[i]),i]=1
try:
image[map(int,iq_vels), range(xWidth)]=1
except IndexError:
pass
image = image.flatten()
return image
def get_peak_idx(self,res_num,iAtten):
print 'IQVel Shape', shape(self.iq_vels)
print 'res_num, iAtten', res_num, iAtten
return argmax(self.iq_vels[res_num,iAtten,:])
def makeTrainData(self):
'''creates images of each class with associated labels and saves to pkl file
0: saturated peak, too much power
1: goldilocks, not too narrow or short
2: underpowered peak, too little power
outputs
train file.pkl. contains...
trainImages: cube of size- xWidth * xWidth * xCenters*trainFrac
trainLabels: 1d array of size- xCenters*trainFrac
testImages: cube of size- xWidth * xWidth * xCenters*testFrac
testLabels: 1d array of size- xCenters*testFrac
'''
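# Labels are one-hot over the 3 classes listed above, e.g. the "goldilocks"
# class (index 1) is encoded as [0, 1, 0]; the same encoding is built for the
# test set further down.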
self.freqs, self.iq_vels,self.Is,self.Qs, self.attens = get_PS_data_all_attens(h5File=initialFile, searchAllRes=True)
self.res_nums = np.shape(self.freqs)[0]
if os.path.isfile(self.PSFile):
print 'loading peak location data from %s' % self.PSFile
PSFile = np.loadtxt(self.PSFile, skiprows=0)[:self.res_nums]
print 'psfile shape', PSFile.shape
good_res = np.array(PSFile[:,0]-PSFile[0,0],dtype='int')
self.res_nums = len(good_res)
print 'goodres',good_res
print 'resnums', self.res_nums
#print 'good_res', good_res
opt_freqs = PSFile[:,1]
opt_attens = PSFile[:,2]
self.attens = self.attens[good_res,:]
#print 'Min Atten', min(opt_attens)
print 'opt_attens shape', shape(opt_attens)
print 'self.attens shape', shape(self.attens)
#print 'self.attens', self.attens
#print 'opt attens range', max(opt_attens)-min(opt_attens)
optAttenLocs = np.where(np.transpose(np.transpose(np.array(self.attens))==np.array(opt_attens))) #find optimal atten indices
#print np.array(self.attens)
#print np.array(opt_attens)
#print np.transpose(opt_attens)==self.attens
optAttenExists = optAttenLocs[0]
self.opt_iAttens = optAttenLocs[1]
attenSkips = optAttenLocs[0]-np.arange(len(optAttenLocs[0]))
attenSkips = np.where(np.diff(attenSkips))[0]+1 #indices where opt atten was not found
for resSkip in attenSkips:
print 'resSkip', resSkip
if(opt_attens[resSkip]<self.attens[resSkip,0]):
opt_attens[resSkip] = self.attens[resSkip,0]
self.opt_iAttens = np.insert(self.opt_iAttens,resSkip,0)
elif(opt_attens[resSkip]>self.attens[resSkip,-1]):
opt_attens[resSkip] = self.attens[resSkip,-1]
self.opt_iAttens = np.insert(self.opt_iAttens,resSkip,np.shape(self.attens)[1]-1)
print '129 opt_atten', opt_attens[129]
print '129 atten', self.attens[129,:]
print '129 opt_freq', opt_freqs[129]
print '129 freq', self.freqs[129]
print 'opt_iAtten shape', np.shape(self.opt_iAttens)
#self.opt_iAttens = opt_attens - min(opt_attens)
print 'attenSkipIndices', attenSkips
else:
print 'no PS.txt file found'
exit()
#good_res = np.arange(self.res_nums)
iAttens = np.zeros((self.res_nums,self.nClass))
iAttens[:,0] = self.opt_iAttens[:self.res_nums] + self.oAttDist
iAttens[:,1] = self.opt_iAttens[:self.res_nums] # goldilocks attenuation
iAttens[:,2] = np.ones((self.res_nums))*20#self.opt_iAttens[:self.res_nums] + self.uAttDist
lb_rej = np.where(iAttens[:,0]<0)[0]
print 'lb_rej', lb_rej
if len(lb_rej) != 0:
iAttens = np.delete(iAttens,lb_rej,axis=0) # when index is below zero
good_res = np.delete(good_res,lb_rej)
self.res_nums = self.res_nums-len(lb_rej)
ub_rej = np.where(iAttens[:,2]>len(self.attens))[0]
if len(ub_rej) != 0:
iAttens = np.delete(iAttens,ub_rej,axis=0)
good_res = np.delete(good_res,ub_rej)
self.res_nums = self.res_nums-len(ub_rej)
self.res_indicies = np.zeros((self.res_nums,self.nClass))
print 'goodresshape', np.shape(good_res)
for i, rn in enumerate(good_res):
self.res_indicies[i,0] = self.get_peak_idx(rn,iAttens[i,0])
self.res_indicies[i,1] = self.get_peak_idx(rn,iAttens[i,1])
self.res_indicies[i,2] = self.get_peak_idx(rn,iAttens[i,2])
self.iq_vels=self.iq_vels[good_res]
self.freqs=self.freqs[good_res]
self.Is = self.Is[good_res]
self.Qs = self.Qs[good_res]
trainImages, trainLabels, testImages, testLabels = [], [], [], []
for c in range(self.nClass):
for rn in range(int(self.trainFrac*self.res_nums) ):
image = self.makeWindowImage(res_num = rn, iAtten= iAttens[rn,c], xCenter=self.res_indicies[rn,c])
if image is not None:
trainImages.append(image)
one_hot = np.zeros(self.nClass)
one_hot[c] = 1
trainLabels.append(one_hot)
# A simpler way would be to separate the train and test data after they were read but this did not occur to me
# before most of the code was written
for c in range(self.nClass):
for rn in range(int(self.trainFrac*self.res_nums), int(self.trainFrac*self.res_nums + self.testFrac*self.res_nums)) :
image = self.makeWindowImage(res_num = rn, iAtten= iAttens[rn,c], xCenter=self.res_indicies[rn,c])
if image is not None:
testImages.append(image)
one_hot = np.zeros(self.nClass)
one_hot[c] = 1
testLabels.append(one_hot)
append = None
if os.path.isfile(self.trainFile):
append = raw_input('Do you want to append this training data to previous data [y/n]')
if (append == 'y') or (os.path.isfile(self.trainFile)== False):
print 'saving files %s & to %s' % (self.trainFile, os.path.dirname(os.path.abspath(self.trainFile)) )
with open(self.trainFile, 'ab') as tf:
pickle.dump([trainImages, trainLabels], tf)
pickle.dump([testImages, testLabels], tf)
def mlClass(self):
'''Code adapted from the tensor flow MNIST tutorial 1.
Using training images and labels the machine learning class (mlClass) "learns" how to classify IQ velocity peaks.
Using similar data the ability of mlClass to classify peaks is tested
The training and test matrices are loaded from file (those made earlier if chosen to not be appended to file
will not be used)
'''
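# The model built below is a single softmax layer (multinomial logistic
# regression) on the flattened IQ-velocity images: y = softmax(x.W + b),
# trained with a cross-entropy loss and the Adam optimizer; there are no
# hidden layers.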
if not os.path.isfile(self.trainFile):
self.makeTrainData()
trainImages, trainLabels, testImages, testLabels = loadPkl(self.trainFile)
print 'Number of training images:', np.shape(trainImages)[0], ' Number of test images:', np.shape(testImages)[0]
if self.scalexWidth != None:
self.xWidth = self.xWidth/self.scalexWidth
if np.shape(trainImages)[1]!=self.xWidth**2:
print 'Please make new training images of the correct size'
exit()
self.nClass = np.shape(trainLabels)[1]
self.x = tf.placeholder(tf.float32, [None, self.xWidth**2]) # correspond to the images
self.W = tf.Variable(tf.zeros([self.xWidth**2, self.nClass])) #the weights used to make predictions on classes
self.b = tf.Variable(tf.zeros([self.nClass])) # the biases also used to make class predictions
self.y = tf.nn.softmax(tf.matmul(self.x, self.W) + self.b) # class label predictions made from x,W,b
y_ = tf.placeholder(tf.float32, [None, self.nClass]) # true class labels identified by user
cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(self.y,1e-10,1.0)) ) # find out how right you are by finding out how wrong you are
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) # the best result is when the wrongness is minimal
init = tf.initialize_all_variables()
self.sess = tf.Session()
self.sess.run(init) # need to do this every time you want to access a tf variable (for example the true class labels calculation or plotweights)
trainReps = 500
batches = 100
if np.shape(trainLabels)[0]< batches:
batches = np.shape(trainLabels)[0]/2
print 'Performing', trainReps, 'training repeats, using batches of', batches
for i in range(trainReps): #perform the training step using random batches of images and according labels
batch_xs, batch_ys = next_batch(trainImages, trainLabels, batches)
self.sess.run(train_step, feed_dict={self.x: batch_xs, y_: batch_ys}) #calculate train_step using feed_dict
print 'true class labels: ', self.sess.run(tf.argmax(y_,1),
feed_dict={self.x: testImages, y_: testLabels})[:25]
print 'class estimates: ', self.sess.run(tf.argmax(self.y,1),
feed_dict={self.x: testImages, y_: testLabels})[:25] #1st 25 printed
#print self.sess.run(self.y, feed_dict={self.x: testImages, y_: testLabels})[:100] # print the scores for each class
correct_prediction = tf.equal(tf.argmax(self.y,1), tf.argmax(y_,1)) #which ones did it get right?
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
score = self.sess.run(accuracy, feed_dict={self.x: testImages, y_: testLabels}) * 100
print 'Accuracy of model in testing: ', score, '%'
if score < 95: print 'Consider making more training images'
del trainImages, trainLabels, testImages, testLabels
def plotWeights(self):
'''creates a 2d map showing the positive and negative weights for each class'''
weights = self.sess.run(self.W)
weights = np.reshape(weights,(self.xWidth,self.xWidth,self.nClass))
weights = np.flipud(weights)
for nc in range(self.nClass):
plt.imshow(weights[:,:, nc])
plt.title('class %i' % nc)
plt.show()
plt.close()
def checkLoopAtten(self, res_num, iAtten, showLoop=False, min_theta = 135, max_theta = 200, max_ratio_threshold = 1.5):
'''Function to check if the IQ loop at a certain attenuation is saturated. 3 checks are made:
if the angle on either side of the sides connected to the longest edge is < min_theta or > max_theta
the loop is saturated, or if the ratio between the 1st and 2nd largest IQ velocity is > max_ratio_threshold.
A True result means that the loop is unsaturated.
Inputs:
res_num: index of resonator in question
iAtten: index of attenuation in question
showLoop: pops up a window of the frame plotted using matplotlib.plot
min/max_theta: limits outside of which the loop is considered saturated
max_ratio_threshold: maximum largest/ 2nd largest IQ velocity allowed before loop is considered saturated
Output:
Boolean. True if unsaturated
'''
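# Rough illustration of the test below (thresholds are the defaults above):
# a loop counts as unsaturated only when both corner angles lie in
# (min_theta, max_theta) = (135, 200) degrees AND max/2nd-max IQ velocity is
# below max_ratio_threshold = 1.5, e.g.
#   theta1=170, theta2=160, max_ratio=1.2  -> True  (unsaturated)
#   theta1=120, theta2=160, max_ratio=1.2  -> False (saturated)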
vindx = (-self.iq_vels[res_num,iAtten,:]).argsort()[:3]
max_theta_vel = math.atan2(self.Qs[res_num,iAtten,vindx[0]-1] - self.Qs[res_num,iAtten,vindx[0]],
self.Is[res_num,iAtten,vindx[0]-1] - self.Is[res_num,iAtten,vindx[0]])
low_theta_vel = math.atan2(self.Qs[res_num,iAtten,vindx[0]-2] - self.Qs[res_num,iAtten,vindx[0]-1],
self.Is[res_num,iAtten,vindx[0]-2] - self.Is[res_num,iAtten,vindx[0]-1])
upp_theta_vel = math.atan2(self.Qs[res_num,iAtten,vindx[0]] - self.Qs[res_num,iAtten,vindx[0]+1],
self.Is[res_num,iAtten,vindx[0]] - self.Is[res_num,iAtten,vindx[0]+1])
theta1 = (math.pi + max_theta_vel - low_theta_vel)/math.pi * 180
theta2 = (math.pi + upp_theta_vel - max_theta_vel)/math.pi * 180
theta1 = abs(theta1)
if theta1 > 360:
theta1 = theta1-360
theta2= abs(theta2)
if theta2 > 360:
theta2 = theta2-360
max_ratio = self.iq_vels[res_num,iAtten,vindx[0]]/ self.iq_vels[res_num,iAtten,vindx[1]]
if showLoop:
plt.plot(self.Is[res_num,iAtten,:],self.Qs[res_num,iAtten,:])
plt.show()
return bool((max_theta >theta1 > min_theta) *
(max_theta > theta2 > min_theta) *
(max_ratio < max_ratio_threshold))
def findAtten(self, inferenceFile, wsFile, res_nums =20, searchAllRes=True, showFrames = True, usePSFit=True, useWSID=True):
'''The trained machine learning class (mlClass) finds the optimum attenuation for each resonator using peak shapes in IQ velocity
Inputs
inferenceFile: widesweep data file to be used
searchAllRes: if only a few resonator attenuations need to be identified set to False
res_nums: if searchAllRes is False, the number of resonators the attenuation value will be estimated for
usePSFit: if true once all the resonator attenuations have been estimated these values are fed into PSFit which opens
the window where the user can manually check all the peaks have been found and make corrections if necessary
Outputs
Goodfile: either immediately after the peaks have been located or through WideAna if useWideAna =True
mlFile: temporary file read in to PSFit.py containing an attenuation estimate for each resonator
'''
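# inferenceLabels built below has shape (res_nums, n_attens, nClass); each row
# holds the softmax scores [saturated, good, underpowered] for one
# resonator/attenuation, and atten_guess later picks, among attenuations
# classified as "good", the one with the largest 2nd-highest IQ velocity.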
try:
self.sess
except AttributeError:
print 'You have to train the model first'
exit()
if self.scalexWidth!= None:
self.xWidth=self.xWidth*self.scalexWidth #reset ready for get_PS_data
self.freqs, self.iq_vels, self.Is, self.Qs, self.attens = get_PS_data(h5File=inferenceFile,
searchAllRes=searchAllRes,
res_nums=res_nums)
total_res_nums = np.shape(self.freqs)[0]
print 'number of res in sweep file', total_res_nums
print 'freqs', self.freqs
print 'freqShape', np.shape(self.freqs)
if searchAllRes:
res_nums = total_res_nums
span = range(res_nums)
self.inferenceLabels = np.zeros((res_nums,len(self.attens),self.nClass))
print 'Using trained algorithm on images on each resonator'
skip = []
for i,rn in enumerate(span):
sys.stdout.write("\r%d of %i" % (i+1,res_nums) )
sys.stdout.flush()
for ia in range(len(self.attens)):
# first check the loop for saturation
nonsaturated_loop = self.checkLoopAtten(res_num=rn, iAtten= ia, showLoop=showFrames)
if nonsaturated_loop:
# each image is formatted into a single element of a list so sess.run can receive a single values dictionary
# argument and save memory
image = self.makeWindowImage(res_num = rn, iAtten= ia,
xCenter=self.get_peak_idx(rn,ia),
showFrames=showFrames)
inferenceImage=[]
inferenceImage.append(image) # inferenceImage is just reformatted image
self.inferenceLabels[rn,ia,:] = self.sess.run(self.y, feed_dict={self.x: inferenceImage} )
del inferenceImage
del image
else:
self.inferenceLabels[rn,ia,:] = [1,0,0] # would just skip these if certain
if np.all(self.inferenceLabels[rn,:,1] ==0): # if all loops appear saturated for resonator then set attenuation to highest
self.inferenceLabels[rn,-1,:] = [0,1,0] # or omit from list
#skip.append(rn)
print '\n'
max_2nd_vels = np.zeros((res_nums,len(self.attens)))
for r in range(res_nums):
for iAtten in range(len(self.attens)):
vindx = (-self.iq_vels[r,iAtten,:]).argsort()[:2]
max_2nd_vels[r,iAtten] = self.iq_vels[r,iAtten,vindx[1]]
atten_guess=numpy.zeros((res_nums))
# choose attenuation where there is the maximum in the 2nd highest IQ velocity
for r in range(res_nums):
class_guess = np.argmax(self.inferenceLabels[r,:,:], 1)
if np.any(class_guess==1):
atten_guess[r] = np.where(class_guess==1)[0][argmax(max_2nd_vels[r,:][np.where(class_guess==1)[0]] )]
else:
atten_guess[r] = argmax(self.inferenceLabels[r,:,1])
wsIds = np.loadtxt(wsFile)[:,0]
if usePSFit:
if skip != None:
atten_guess = np.delete(atten_guess,skip)
self.mlFile = ('.').join(inferenceFile.split('.')[:-1]) + '-ml.txt'
if os.path.isfile(self.mlFile):
self.mlFile = self.mlFile+time.strftime("-%Y-%m-%d-%H-%M-%S")
#shutil.copy(self.mlFile, self.mlFile+time.strftime("-%Y-%m-%d-%H-%M-%S"))
print 'wrote', self.mlFile
mlf = open(self.mlFile,'wb') #mlf machine learning file is temporary
for ag in atten_guess:
line = "%12d\n" % ag
mlf.write(line)
mlf.close()
#on ubuntu 14.04 and matplotlib-1.5.1 backend 'Qt4Agg' running matplotlib.show() prior to this causes segmentation fault
os.system("python PSFit.py 1")
#os.remove(self.mlFile)
else:
baseFile = ('.').join(inferenceFile.split('.')[:-1])
#saveFile = baseFile + '.txt'
saveFile = baseFile[:-16] + '.txt'
#saveFile = 'newAttens.txt'
sf = open(saveFile,'wb')
print 'saving file', saveFile
print 'baseFile', baseFile
#sf.write('1\t1\t1\t1 \n')
for r in np.delete(range(len(atten_guess)),skip):
line = "%4i \t %10.9e \t %4i \n" % (wsIds[r], self.freqs[r, self.get_peak_idx(r, atten_guess[r])],
self.attens[atten_guess[r]] )
sf.write(line)
print line
sf.close()
def next_batch(trainImages, trainLabels, batch_size):
'''selects a random batch of batch_size from trainImages and trainLabels'''
perm = random.sample(range(len(trainImages)), batch_size)
trainImages = np.array(trainImages)[perm,:]
trainLabels = np.array(trainLabels)[perm,:]
return trainImages, trainLabels
def loadPkl(filename):
'''load the train and test data to train and test mlClass
pkl file hierarchy is as follows:
-The file is split in two, one side for train data and one side for test data -These halves are further divided into image data and labels
-makeTrainData creates image data of size: xWidth * xWidth * res_nums and the label data of size: res_nums
-each time makeTrainData is run a new image cube and label array is created and appended to the old data
so the final size of the file is (xWidth * xWidth * res_nums * "no of train runs") + (res_nums * "no of train runs") + [the equivalent test data structure]
A simpler way would be to separate the train and test data after they were read but this did not occur to me
before most of the code was written
Input
pkl filename to be read.
Outputs
image cube and label array
'''
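# Sketch of the pickle layout described above (two dumps per makeTrainData run,
# sizes depend on the data):
#   load 0 -> [trainImages, trainLabels]   # first run
#   load 1 -> [testImages,  testLabels]
#   load 2 -> [trainImages, trainLabels]   # appended by a later run
#   load 3 -> [testImages,  testLabels]
# The loop below keeps calling pickle.load until EOFError to recover them all.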
file =[]
with open(filename, 'rb') as f:
while 1:
try:
file.append(pickle.load(f))
except EOFError:
break
trainImages = file[0][0]
trainLabels = file[0][1]
testImages = file[1][0]
testLabels = file[1][1]
if np.shape(file)[0]/2 > 1:
for i in range(1, np.shape(file)[0]/2-1):
trainImages = np.append(trainImages, file[2*i][0], axis=0)
trainLabels = np.append(trainLabels, file[2*i][1], axis=0)
testImages = np.append(testImages, file[2*i+1][0], axis=0)
testLabels = np.append(testLabels, file[2*i+1][1], axis=0)
print "loaded dataset from ", filename
return trainImages, trainLabels, testImages, testLabels
def get_PS_data(h5File=None, searchAllRes=False, res_nums=50):
'''A function to read and write all resonator information, so the PSFit function does not have to be re-run on all
resonators every time the script is run. This is used on both the initial and inference file
Inputs:
h5File: the power sweep h5 file for the information to be extracted from. Can be initialFile or inferenceFile
'''
print 'get_PS_data H5 file', h5File
baseFile = ('.').join(h5File.split('.')[:-1])
PSPFile = baseFile[:-16] + '.pkl'
print 'resNums', res_nums
if os.path.isfile(PSPFile):
file = []
with open(PSPFile, 'rb') as f:
for v in range(5):
file.append(pickle.load(f))
if searchAllRes:
res_nums = -1
freqs = file[0][:res_nums]
print 'freqshape get_PS_data', np.shape(freqs)
iq_vels = file[1][:res_nums]
Is = file[2][:res_nums]
Qs= file[3][:res_nums]
attens = file[4]
else:
PSFit = PSFitting(initialFile=h5File)
PSFit.loadps()
tot_res_nums= len(PSFit.freq)
print 'totalResNums in getPSdata', tot_res_nums
if searchAllRes:
res_nums = tot_res_nums
res_size = np.shape(PSFit.loadres()['iq_vels'])
freqs = np.zeros((res_nums, res_size[1]+1))
iq_vels = np.zeros((res_nums, res_size[0], res_size[1]))
Is = np.zeros((res_nums, res_size[0], res_size[1]+1))
Qs = np.zeros((res_nums, res_size[0], res_size[1]+1))
attens = np.zeros((res_size[0]))
for r in range(res_nums):
sys.stdout.write("\r%d of %i" % (r+1,res_nums) )
sys.stdout.flush()
res = PSFit.loadres()
freqs[r,:] =res['freq']
iq_vels[r,:,:] = res['iq_vels']
Is[r,:,:] = res['Is']
Qs[r,:,:] = res['Qs']
attens[:] = res['attens']
PSFit.resnum += 1
with open(PSPFile, "wb") as f:
pickle.dump(freqs, f)
pickle.dump(iq_vels, f)
pickle.dump(Is, f)
pickle.dump(Qs, f)
pickle.dump(attens, f)
return freqs, iq_vels, Is, Qs, attens
def get_PS_data_all_attens(h5File=None, searchAllRes=False, res_nums=50):
'''A function to read and write all resonator information, so the PSFit function does not have to be re-run on all
resonators every time the script is run. This is used on both the initial and inference file
Inputs:
h5File: the power sweep h5 file for the information to be extracted from. Can be initialFile or inferenceFile
'''
print 'get_PS_data_all_attens H5 file', h5File
baseFile = ('.').join(h5File.split('.')[:-1])
PSPFile = baseFile[:-16] + '.pkl'
print 'resNums', res_nums
if os.path.isfile(PSPFile):
file = []
with open(PSPFile, 'rb') as f:
for v in range(5):
file.append(pickle.load(f))
if searchAllRes:
res_nums = -1
freqs = file[0][:]
print 'freqshape get_PS_data', np.shape(freqs)
iq_vels = file[1][:]
Is = file[2][:]
Qs= file[3][:]
attens = file[4]
else:
freqs = file[0][:res_nums]
print 'freqshape get_PS_data', np.shape(freqs)
iq_vels = file[1][:res_nums]
Is = file[2][:res_nums]
Qs= file[3][:res_nums]
attens = file[4]
else:
PSFit = PSFitting(initialFile=h5File)
PSFit.loadps()
tot_res_nums= len(PSFit.freq)
print 'totalResNums in getPSdata', tot_res_nums
if searchAllRes:
res_nums = tot_res_nums
res_size = np.shape(PSFit.loadres()['iq_vels'])
freqs = np.zeros((res_nums, res_size[1]+1))
iq_vels = np.zeros((res_nums, res_size[0], res_size[1]))
Is = np.zeros((res_nums, res_size[0], res_size[1]+1))
Qs = np.zeros((res_nums, res_size[0], res_size[1]+1))
attens = np.zeros((res_nums, res_size[0]))
for r in range(res_nums):
sys.stdout.write("\r%d of %i" % (r+1,res_nums) )
sys.stdout.flush()
res = PSFit.loadres()
freqs[r,:] =res['freq']
iq_vels[r,:,:] = res['iq_vels']
Is[r,:,:] = res['Is']
Qs[r,:,:] = res['Qs']
attens[r,:] = res['attens']
PSFit.resnum += 1
with open(PSPFile, "wb") as f:
pickle.dump(freqs, f)
pickle.dump(iq_vels, f)
pickle.dump(Is, f)
pickle.dump(Qs, f)
pickle.dump(attens, f)
print 'attenshape',shape(attens)
return freqs, iq_vels, Is, Qs, attens
def main(initialFile=None, inferenceFile=None, wsFile=None, res_nums=497):
mlClass = mlClassification(initialFile=initialFile)
#mlClass.makeTrainData()
mlClass.mlClass()
mlClass.plotWeights()
mlClass.findAtten(inferenceFile=inferenceFile, wsFile=wsFile, searchAllRes=True, usePSFit=False, showFrames=False, res_nums=497)
if __name__ == "__main__":
initialFile = None
inferenceFile = None
if len(sys.argv) > 2:
initialFileName = sys.argv[1]
inferenceFileName = sys.argv[2]
wsFileName = sys.argv[3]
#mdd = os.environ['pwd']
initialFile = os.path.join('./mlTrainingData',initialFileName)
#inferenceFile = os.path.join(mdd,inferenceFileName)
#initialFile = initialFileName
inferenceFile = os.path.join('/home/neelay/DarknessData/powerSweeps', inferenceFileName)
wsFile = os.path.join('/home/neelay/DarknessData/wideSweeps', wsFileName)
else:
print "need to specify an initial and inference filename located in MKID_DATA_DIR"
exit()
main(initialFile=initialFile, inferenceFile=inferenceFile, wsFile=wsFile)
| gpl-2.0 |
depet/scikit-learn | examples/linear_model/plot_ols_3d.py | 8 | 2024 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Sparsity Example: Fitting only features 1 and 2
=========================================================
Features 1 and 2 of the diabetes-dataset are fitted and
plotted below. It illustrates that although feature 2
has a strong coefficient on the full model, it does not
give us much regarding `y` when compared to just feature 1
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import pylab as pl
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets, linear_model
diabetes = datasets.load_diabetes()
indices = (0, 1)
X_train = diabetes.data[:-20, indices]
X_test = diabetes.data[-20:, indices]
y_train = diabetes.target[:-20]
y_test = diabetes.target[-20:]
ols = linear_model.LinearRegression()
ols.fit(X_train, y_train)
###############################################################################
# Plot the figure
def plot_figs(fig_num, elev, azim, X_train, clf):
fig = pl.figure(fig_num, figsize=(4, 3))
pl.clf()
ax = Axes3D(fig, elev=elev, azim=azim)
ax.scatter(X_train[:, 0], X_train[:, 1], y_train, c='k', marker='+')
ax.plot_surface(np.array([[-.1, -.1], [.15, .15]]),
np.array([[-.1, .15], [-.1, .15]]),
clf.predict(np.array([[-.1, -.1, .15, .15],
[-.1, .15, -.1, .15]]).T
).reshape((2, 2)),
alpha=.5)
ax.set_xlabel('X_1')
ax.set_ylabel('X_2')
ax.set_zlabel('Y')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
#Generate the three different figures from different views
elev = 43.5
azim = -110
plot_figs(1, elev, azim, X_train, ols)
elev = -.5
azim = 0
plot_figs(2, elev, azim, X_train, ols)
elev = -.5
azim = 90
plot_figs(3, elev, azim, X_train, ols)
pl.show()
| bsd-3-clause |
MohammedWasim/deeppy | examples/convnet_cifar.py | 9 | 2947 | #!/usr/bin/env python
"""
Convnets for image classification (2)
=====================================
"""
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import deeppy as dp
# Fetch CIFAR10 data
dataset = dp.dataset.CIFAR10()
x_train, y_train, x_test, y_test = dataset.data(dp_dtypes=True)
# Normalize pixel intensities
scaler = dp.StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
# Prepare network inputs
batch_size = 128
train_input = dp.SupervisedInput(x_train, y_train, batch_size=batch_size)
test_input = dp.Input(x_test, batch_size=batch_size)
# Setup network
def conv_layer(n_filters):
return dp.Convolution(
n_filters=n_filters,
filter_shape=(5, 5),
border_mode='full',
weights=dp.Parameter(dp.AutoFiller(gain=1.25), weight_decay=0.003),
)
def pool_layer():
return dp.Pool(
win_shape=(3, 3),
strides=(2, 2),
border_mode='same',
method='max',
)
net = dp.NeuralNetwork(
layers=[
conv_layer(32),
dp.ReLU(),
pool_layer(),
conv_layer(32),
dp.ReLU(),
pool_layer(),
conv_layer(64),
dp.ReLU(),
pool_layer(),
dp.Flatten(),
dp.DropoutFullyConnected(
n_out=64,
weights=dp.Parameter(dp.AutoFiller(gain=1.25), weight_decay=0.03)
),
dp.ReLU(),
dp.FullyConnected(
n_out=dataset.n_classes,
weights=dp.Parameter(dp.AutoFiller(gain=1.25)),
)
],
loss=dp.SoftmaxCrossEntropy(),
)
# Train network
def test_error():
return np.mean(net.predict(test_input) != y_test)
n_epochs = [8, 8]
learn_rate = 0.05
for i, max_epochs in enumerate(n_epochs):
lr = learn_rate/10**i
trainer = dp.StochasticGradientDescent(
max_epochs=max_epochs,
learn_rule=dp.Momentum(learn_rate=lr, momentum=0.9),
)
trainer.train(net, train_input, test_error)
# Evaluate on test data
print('Test error rate: %.4f' % test_error())
# Plot image examples.
def plot_img(img, title):
plt.figure()
plt.imshow(img, interpolation='nearest')
plt.title(title)
plt.axis('off')
plt.tight_layout()
img_bhwc = np.transpose(x_train[:70], (0, 2, 3, 1))
img_tile = dp.misc.img_tile(dp.misc.img_stretch(img_bhwc), aspect_ratio=0.75,
border_color=1.0)
plot_img(img_tile, title='CIFAR10 example images')
# Plot convolutional filters.
filters = [l.weights.array for l in net.layers
if isinstance(l, dp.Convolution)]
fig = plt.figure()
gs = matplotlib.gridspec.GridSpec(2, 2, height_ratios=[1, 3])
subplot_idxs = [0, 2, 3]
for i, f in enumerate(filters):
ax = plt.subplot(gs[subplot_idxs[i]])
ax.imshow(dp.misc.conv_filter_tile(f), cmap='gray',
interpolation='nearest')
ax.set_title('Conv layer %i' % i)
ax.axis('off')
plt.tight_layout()
| mit |
peterwilletts24/Monsoon-Python-Scripts | geopotential/plot_geopotential.py | 1 | 11500 | """
Load mean geopotential heights and plot in colour
"""
import os, sys
import matplotlib.pyplot as plt
import matplotlib.cm as mpl_cm
from mpl_toolkits.basemap import Basemap
import iris
import numpy as np
import imp
import h5py
import cartopy.crs as ccrs
from textwrap import wrap
model_name_convert_title = imp.load_source('util', '/home/pwille/python_scripts/model_name_convert_title.py')
def main():
def unrotate_pole(rotated_lons, rotated_lats, pole_lon, pole_lat):
"""
Convert rotated-pole lons and lats to unrotated ones.
Example::
lons, lats = unrotate_pole(grid_lons, grid_lats, pole_lon, pole_lat)
.. note:: Uses proj.4 to perform the conversion.
"""
src_proj = ccrs.RotatedGeodetic(pole_longitude=pole_lon,
pole_latitude=pole_lat)
target_proj = ccrs.Geodetic()
res = target_proj.transform_points(x=rotated_lons, y=rotated_lats,
src_crs=src_proj)
unrotated_lon = res[..., 0]
unrotated_lat = res[..., 1]
return unrotated_lon, unrotated_lat
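# Illustrative call (pole values here are placeholders, not from this run):
#   lons, lats = unrotate_pole(np.array([[10.0]]), np.array([[5.0]]),
#                              pole_lon=177.5, pole_lat=37.5)
# returns the true longitude/latitude of that rotated-grid point.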
# Set rotated pole longitude and latitude, not ideal but easier than trying to find how to get iris to tell me what it is.
plot_levels = [925, 850, 700, 500]
#plot_levels = [500]
experiment_id = 'djzny'
p_levels = [1000, 950, 925, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, 10]
expmin1 = experiment_id[:-1]
plot_type='mean'
plot_diag='temp'
fname_h = '/projects/cascade/pwille/temp/408_pressure_levels_interp_pressure_%s_%s' % (experiment_id, plot_type)
fname_d = '/projects/cascade/pwille/temp/%s_pressure_levels_interp_%s_%s' % (plot_diag, experiment_id, plot_type)
print fname_h
print fname_d
# Height data file
with h5py.File(fname_h, 'r') as i:
mh = i['%s' % plot_type]
mean_heights = mh[...]
print mean_heights.shape
with h5py.File(fname_d, 'r') as i:
mh = i['%s' % plot_type]
mean_var = mh[...]
print mean_var.shape
#lon_low= 60
#lon_high = 105
#lat_low = -10
#lat_high = 30
f_oro = '/projects/cascade/pwille/moose_retrievals/%s/%s/33.pp' % (expmin1, experiment_id)
oro = iris.load_cube(f_oro)
fu = '/projects/cascade/pwille/moose_retrievals/%s/%s/30201_mean.pp' % (expmin1, experiment_id)
#fv = '/projects/cascade/pwille/moose_retrievals/%s/%s/30202.pp' % (expmin1, experiment_id)
u_wind,v_wind = iris.load(fu)
print u_wind.shape
lat_w = u_wind.coord('grid_latitude').points
lon_w = u_wind.coord('grid_longitude').points
p_levs = u_wind.coord('pressure').points
lat = oro.coord('grid_latitude').points
lon = oro.coord('grid_longitude').points
lon_low= np.min(lon)
# Wind may have different number of grid points so need to do this twice
lat_w = u_wind.coord('grid_latitude').points
lon_w = u_wind.coord('grid_longitude').points
p_levs = u_wind.coord('pressure').points
lat = oro.coord('grid_latitude').points
lon = oro.coord('grid_longitude').points
cs_w = u_wind.coord_system('CoordSystem')
cs = oro.coord_system('CoordSystem')
if isinstance(cs_w, iris.coord_systems.RotatedGeogCS):
print ' Wind - %s - Unrotate pole %s' % (experiment_id, cs_w)
lons_w, lats_w = np.meshgrid(lon_w, lat_w)
lons_w,lats_w = unrotate_pole(lons_w,lats_w, cs_w.grid_north_pole_longitude, cs_w.grid_north_pole_latitude)
lon_w=lons_w[0]
lat_w=lats_w[:,0]
csur_w=cs_w.ellipsoid
for i, coord in enumerate (u_wind.coords()):
if coord.standard_name=='grid_latitude':
lat_dim_coord_uwind = i
if coord.standard_name=='grid_longitude':
lon_dim_coord_uwind = i
u_wind.remove_coord('grid_latitude')
u_wind.remove_coord('grid_longitude')
u_wind.add_dim_coord(iris.coords.DimCoord(points=lat_w, standard_name='grid_latitude', units='degrees', coord_system=csur_w),lat_dim_coord_uwind )
u_wind.add_dim_coord(iris.coords.DimCoord(points=lon_w, standard_name='grid_longitude', units='degrees', coord_system=csur_w), lon_dim_coord_uwind)
v_wind.remove_coord('grid_latitude')
v_wind.remove_coord('grid_longitude')
v_wind.add_dim_coord(iris.coords.DimCoord(points=lat_w, standard_name='grid_latitude', units='degrees', coord_system=csur_w), lat_dim_coord_uwind)
v_wind.add_dim_coord(iris.coords.DimCoord(points=lon_w, standard_name='grid_longitude', units='degrees', coord_system=csur_w),lon_dim_coord_uwind )
if isinstance(cs, iris.coord_systems.RotatedGeogCS):
print ' 33.pp - %s - Unrotate pole %s' % (experiment_id, cs)
lons, lats = np.meshgrid(lon, lat)
lon_low= np.min(lons)
lon_high = np.max(lons)
lat_low = np.min(lats)
lat_high = np.max(lats)
lon_corners, lat_corners = np.meshgrid((lon_low, lon_high), (lat_low, lat_high))
lons,lats = unrotate_pole(lons,lats, cs.grid_north_pole_longitude, cs.grid_north_pole_latitude)
lon_corner_u,lat_corner_u = unrotate_pole(lon_corners, lat_corners, cs.grid_north_pole_longitude, cs.grid_north_pole_latitude)
#lon_highu,lat_highu = unrotate_pole(lon_high, lat_high, cs.grid_north_pole_longitude, cs.grid_north_pole_latitude)
lon=lons[0]
lat=lats[:,0]
print lon_corners
print lat_corners
print lon_corner_u
print lat_corner_u
print lon_corner_u[0,0]
print lon_corner_u[0,1]
print lat_corner_u[0,0]
print lat_corner_u[1,0]
lon_low = lon_corner_u[0,0]
lon_high = lon_corner_u[0,1]
lat_low = lat_corner_u[0,0]
lat_high = lat_corner_u[1,0]
csur=cs.ellipsoid
for i, coord in enumerate (oro.coords()):
if coord.standard_name=='grid_latitude':
lat_dim_coord_oro = i
if coord.standard_name=='grid_longitude':
lon_dim_coord_oro = i
oro.remove_coord('grid_latitude')
oro.remove_coord('grid_longitude')
oro.add_dim_coord(iris.coords.DimCoord(points=lat, standard_name='grid_latitude', units='degrees', coord_system=csur), lat_dim_coord_oro)
oro.add_dim_coord(iris.coords.DimCoord(points=lon, standard_name='grid_longitude', units='degrees', coord_system=csur), lon_dim_coord_oro)
else:
lons, lats = np.meshgrid(lon, lat)
lons_w, lats_w = np.meshgrid(lon_w, lat_w)
lon_low= np.min(lons)
lon_high = np.max(lons)
lat_low = np.min(lats)
lat_high = np.max(lats)
# 2 degree lats lon lists for wind regridding
lat_wind_1deg = np.arange(lat_low,lat_high, 2)
lon_wind_1deg = np.arange(lon_low,lon_high, 2)
for p in plot_levels:
#m_title = 'Height of %s-hPa level (m)' % (p)
# Set pressure height contour min/max
if p == 925:
clev_min = 680.
clev_max = 810.
elif p == 850:
clev_min = 1435.
clev_max = 1530.
elif p == 700:
clev_min = 3090.
clev_max = 3155.
elif p == 500:
clev_min = 5800.
clev_max = 5890.
else:
print 'Contour min/max not set for this pressure level'
# Set potential temperature min/max
if p == 925:
clevpt_min = 295.
clevpt_max = 310.
elif p == 850:
clevpt_min = 300.
clevpt_max = 320.
elif p == 700:
clevpt_min = 310.
clevpt_max = 325.
elif p == 500:
clevpt_min = 321.
clevpt_max = 335.
else:
print 'Potential temperature min/max not set for this pressure level'
# Set specific humidity min/max
if p == 925:
clevsh_min = 0.012
clevsh_max = 0.022
elif p == 850:
clevsh_min = 0.0035
clevsh_max = 0.018
elif p == 700:
clevsh_min = 0.002
clevsh_max = 0.012
elif p == 500:
clevsh_min = 0.002
clevsh_max = 0.006
else:
print 'Specific humidity min/max not set for this pressure level'
#clevs_col = np.arange(clev_min, clev_max)
clevs_lin = np.linspace(clev_min, clev_max, num=20)
s = np.searchsorted(p_levels[::-1], p)
sc = np.searchsorted(p_levs, p)
# Set plot contour lines for pressure levels
plt_h = mean_heights[:,:,-(s+1)]
plt_h[plt_h==0] = np.nan
# Set plot colours for variable
plt_v = mean_var[:,:,-(s+1)]
plt_v[plt_v==0] = np.nan
# Set u,v for winds, linear interpolate to approx. 2 degree grid
u_interp = u_wind[sc,:,:]
v_interp = v_wind[sc,:,:]
#print u_interp.shape
sample_points = [('grid_latitude', lat_wind_1deg), ('grid_longitude', lon_wind_1deg)]
u = iris.analysis.interpolate.linear(u_interp, sample_points).data
v = iris.analysis.interpolate.linear(v_interp, sample_points).data
lons_w, lats_w = np.meshgrid(lon_wind_1deg, lat_wind_1deg)
m =\
Basemap(llcrnrlon=lon_low,llcrnrlat=lat_low,urcrnrlon=lon_high,urcrnrlat=lat_high,projection='mill')
x, y = m(lons, lats)
x_w, y_w = m(lons_w, lats_w)
print x_w.shape
fig=plt.figure(figsize=(8,8))
ax = fig.add_axes([0.05,0.05,0.9,0.85])
m.drawcoastlines(color='gray')
m.drawcountries(color='gray')
m.drawcoastlines(linewidth=0.5)
#m.fillcontinents(color='#CCFF99')
m.drawparallels(np.arange(-80,81,10),labels=[1,1,0,0])
m.drawmeridians(np.arange(0,360,10),labels=[0,0,0,1])
cs_lin = m.contour(x,y, plt_h, clevs_lin,colors='k',linewidths=0.5)
#wind = m.barbs(x_w,y_w, u, v, length=6)
if plot_diag=='temp':
cs_col = m.contourf(x,y, plt_v, np.linspace(clevpt_min, clevpt_max), cmap=plt.cm.RdBu_r)
cbar = m.colorbar(cs_col,location='bottom',pad="5%", format = '%d')
cbar.set_label('K')
plt.suptitle('Height, Potential Temperature and Wind Vectors at %s hPa'% (p), fontsize=10)
elif plot_diag=='sp_hum':
cs_col = m.contourf(x,y, plt_v, np.linspace(clevsh_min, clevsh_max), cmap=plt.cm.RdBu_r)
cbar = m.colorbar(cs_col,location='bottom',pad="5%", format = '%.3f')
cbar.set_label('kg/kg')
plt.suptitle('Height, Specific Humidity and Wind Vectors at %s hPa'% (p), fontsize=10)
wind = m.quiver(x_w,y_w, u, v, scale=400)
qk = plt.quiverkey(wind, 0.1, 0.1, 5, '5 m/s', labelpos='W')
plt.clabel(cs_lin, fontsize=10, fmt='%d', color='black')
#plt.title('%s\n%s' % (m_title, model_name_convert_title.main(experiment_id)), fontsize=10)
plt.title('\n'.join(wrap('%s' % (model_name_convert_title.main(experiment_id)), 80)), fontsize=10)
#plt.show()
if not os.path.exists('/home/pwille/figures/%s/%s' % (experiment_id, plot_diag)): os.makedirs('/home/pwille/figures/%s/%s' % (experiment_id, plot_diag))
plt.savefig('/home/pwille/figures/%s/%s/geop_height_%shPa_%s_%s.tiff' % (experiment_id, plot_diag, p, experiment_id, plot_diag), format='tiff', transparent=True)
if __name__ == '__main__':
main()
| mit |
maxlikely/scikit-learn | sklearn/kernel_approximation.py | 1 | 15871 | # -*- coding: utf-8 -*-
"""
The :mod:`sklearn.kernel_approximation` module implements several
approximate kernel feature maps based on Fourier transforms.
"""
# Author: Andreas Mueller <[email protected]>
#
# License: BSD Style.
import warnings
import numpy as np
import scipy.sparse as sp
from scipy.linalg import svd
from .base import BaseEstimator
from .base import TransformerMixin
from .utils import array2d, atleast2d_or_csr, check_random_state
from .utils.extmath import safe_sparse_dot
from .metrics.pairwise import pairwise_kernels
class RBFSampler(BaseEstimator, TransformerMixin):
"""Approximates feature map of an RBF kernel by Monte Carlo approximation
of its Fourier transform.
Parameters
----------
gamma: float
Parameter of RBF kernel: exp(-γ × x²)
n_components: int
Number of Monte Carlo samples per original feature.
Equals the dimensionality of the computed feature space.
random_state : {int, RandomState}, optional
If int, random_state is the seed used by the random number generator;
if RandomState instance, random_state is the random number generator.
Notes
-----
See "Random Features for Large-Scale Kernel Machines" by A. Rahimi and
Benjamin Recht.
"""
def __init__(self, gamma=1., n_components=100, random_state=None):
self.gamma = gamma
self.n_components = n_components
self.random_state = random_state
def fit(self, X, y=None):
"""Fit the model with X.
Samples random projection according to n_features.
Parameters
----------
X: {array-like, sparse matrix}, shape (n_samples, n_features)
Training data, where n_samples in the number of samples
and n_features is the number of features.
Returns
-------
self : object
Returns the transformer.
"""
X = atleast2d_or_csr(X)
random_state = check_random_state(self.random_state)
n_features = X.shape[1]
self.random_weights_ = (np.sqrt(self.gamma) * random_state.normal(
size=(n_features, self.n_components)))
self.random_offset_ = random_state.uniform(0, 2 * np.pi,
size=self.n_components)
return self
def transform(self, X, y=None):
"""Apply the approximate feature map to X.
Parameters
----------
X: {array-like, sparse matrix}, shape (n_samples, n_features)
New data, where n_samples in the number of samples
and n_features is the number of features.
Returns
-------
X_new: array-like, shape (n_samples, n_components)
"""
X = atleast2d_or_csr(X)
projection = safe_sparse_dot(X, self.random_weights_)
projection += self.random_offset_
np.cos(projection, projection)
projection *= np.sqrt(2.) / np.sqrt(self.n_components)
return projection
class SkewedChi2Sampler(BaseEstimator, TransformerMixin):
"""Approximates feature map of the "skewed chi-squared" kernel by Monte
Carlo approximation of its Fourier transform.
Parameters
----------
skewedness : float
"skewedness" parameter of the kernel. Needs to be cross-validated.
n_components : int
number of Monte Carlo samples per original feature.
Equals the dimensionality of the computed feature space.
random_state : {int, RandomState}, optional
If int, random_state is the seed used by the random number generator;
if RandomState instance, random_state is the random number generator.
References
----------
See "Random Fourier Approximations for Skewed Multiplicative Histogram
Kernels" by Fuxin Li, Catalin Ionescu and Cristian Sminchisescu.
See also
--------
AdditiveChi2Sampler : A different approach for approximating an additive
variant of the chi squared kernel.
sklearn.metrics.chi2_kernel : The exact chi squared kernel.
"""
def __init__(self, skewedness=1., n_components=100, random_state=None):
self.skewedness = skewedness
self.n_components = n_components
self.random_state = random_state
def fit(self, X, y=None):
"""Fit the model with X.
Samples random projection according to n_features.
Parameters
----------
X: array-like, shape (n_samples, n_features)
Training data, where n_samples in the number of samples
and n_features is the number of features.
Returns
-------
self : object
Returns the transformer.
"""
X = array2d(X)
random_state = check_random_state(self.random_state)
n_features = X.shape[1]
uniform = random_state.uniform(size=(n_features, self.n_components))
# transform by inverse CDF of sech
self.random_weights_ = (1. / np.pi
* np.log(np.tan(np.pi / 2. * uniform)))
self.random_offset_ = random_state.uniform(0, 2 * np.pi,
size=self.n_components)
return self
def transform(self, X, y=None):
"""Apply the approximate feature map to X.
Parameters
----------
X: array-like, shape (n_samples, n_features)
New data, where n_samples in the number of samples
and n_features is the number of features.
Returns
-------
X_new: array-like, shape (n_samples, n_components)
"""
X = array2d(X, copy=True)
if (X < 0).any():
raise ValueError("X may not contain entries smaller than zero.")
X += self.skewedness
np.log(X, X)
projection = safe_sparse_dot(X, self.random_weights_)
projection += self.random_offset_
np.cos(projection, projection)
projection *= np.sqrt(2.) / np.sqrt(self.n_components)
return projection
class AdditiveChi2Sampler(BaseEstimator, TransformerMixin):
"""Approximate feature map for additive chi² kernel.
Uses sampling of the Fourier transform of the kernel characteristic
at regular intervals.
Since the kernel that is to be approximated is additive, the components of
the input vectors can be treated separately. Each entry in the original
space is transformed into 2×sample_steps-1 features, where sample_steps is
a parameter of the method. Typical values of sample_steps include 1, 2 and
3.
Optimal choices for the sampling interval for certain data ranges can be
computed (see the reference). The default values should be reasonable.
Parameters
----------
sample_steps : int, optional
Gives the number of (complex) sampling points.
sample_interval : float, optional
Sampling interval. Must be specified when sample_steps not in {1,2,3}.
Notes
-----
This estimator approximates a slightly different version of the additive
chi squared kernel than ``metric.additive_chi2`` computes.
See also
--------
SkewedChi2Sampler : A Fourier-approximation to a non-additive variant of
the chi squared kernel.
sklearn.metrics.chi2_kernel : The exact chi squared kernel.
sklearn.metrics.additive_chi2_kernel : The exact additive chi squared
kernel.
References
----------
See `"Efficient additive kernels via explicit feature maps"
<http://eprints.pascal-network.org/archive/00006964/01/vedaldi10.pdf>`_
Vedaldi, A. and Zisserman, A., Computer Vision and Pattern Recognition 2010
"""
def __init__(self, sample_steps=2, sample_interval=None):
self.sample_steps = sample_steps
self.sample_interval = sample_interval
def fit(self, X, y=None):
"""Set parameters."""
X = atleast2d_or_csr(X)
if self.sample_interval is None:
# See reference, figure 2 c)
if self.sample_steps == 1:
self.sample_interval_ = 0.8
elif self.sample_steps == 2:
self.sample_interval_ = 0.5
elif self.sample_steps == 3:
self.sample_interval_ = 0.4
else:
raise ValueError("If sample_steps is not in [1, 2, 3],"
" you need to provide sample_interval")
else:
self.sample_interval_ = self.sample_interval
return self
def transform(self, X, y=None):
"""Apply approximate feature map to X.
Parameters
----------
X: {array-like, sparse matrix}, shape = (n_samples, n_features)
Returns
-------
X_new: {array, sparse matrix}, \
shape = (n_samples, n_features × (2×sample_steps - 1))
Whether the return value is an array or sparse matrix depends on
the type of the input X.
"""
X = atleast2d_or_csr(X)
sparse = sp.issparse(X)
# check if X has negative values. Doesn't play well with np.log.
if ((X.data if sparse else X) < 0).any():
raise ValueError("Entries of X must be non-negative.")
# zeroth component
# 1/cosh = sech
# cosh(0) = 1.0
transf = self._transform_sparse if sparse else self._transform_dense
return transf(X)
def _transform_dense(self, X):
non_zero = (X != 0.0)
X_nz = X[non_zero]
X_step = np.zeros_like(X)
X_step[non_zero] = np.sqrt(X_nz * self.sample_interval_)
X_new = [X_step]
log_step_nz = self.sample_interval_ * np.log(X_nz)
step_nz = 2 * X_nz * self.sample_interval_
for j in range(1, self.sample_steps):
factor_nz = np.sqrt(step_nz /
np.cosh(np.pi * j * self.sample_interval_))
X_step = np.zeros_like(X)
X_step[non_zero] = factor_nz * np.cos(j * log_step_nz)
X_new.append(X_step)
X_step = np.zeros_like(X)
X_step[non_zero] = factor_nz * np.sin(j * log_step_nz)
X_new.append(X_step)
return np.hstack(X_new)
def _transform_sparse(self, X):
indices = X.indices.copy()
indptr = X.indptr.copy()
data_step = np.sqrt(X.data * self.sample_interval_)
X_step = sp.csr_matrix((data_step, indices, indptr),
shape=X.shape, dtype=X.dtype, copy=False)
X_new = [X_step]
log_step_nz = self.sample_interval_ * np.log(X.data)
step_nz = 2 * X.data * self.sample_interval_
for j in range(1, self.sample_steps):
factor_nz = np.sqrt(step_nz /
np.cosh(np.pi * j * self.sample_interval_))
data_step = factor_nz * np.cos(j * log_step_nz)
X_step = sp.csr_matrix((data_step, indices, indptr),
shape=X.shape, dtype=X.dtype, copy=False)
X_new.append(X_step)
data_step = factor_nz * np.sin(j * log_step_nz)
X_step = sp.csr_matrix((data_step, indices, indptr),
shape=X.shape, dtype=X.dtype, copy=False)
X_new.append(X_step)
return sp.hstack(X_new)
class Nystroem(BaseEstimator, TransformerMixin):
"""Approximate a kernel map using a subset of the training data.
Constructs an approximate feature map for an arbitrary kernel
using a subset of the data as basis.
Parameters
----------
kernel : string or callable, default="rbf"
Kernel map to be approximated.
n_components : int
Number of features to construct.
How many data points will be used to construct the mapping.
gamma : float, default=1/n_features.
Parameter for the RBF kernel.
random_state : {int, RandomState}, optional
If int, random_state is the seed used by the random number generator;
if RandomState instance, random_state is the random number generator.
Attributes
----------
`components_` : array, shape (n_components, n_features)
Subset of training points used to construct the feature map.
`component_indices_` : array, shape (n_components)
Indices of ``components_`` in the training set.
`normalization_` : array, shape (n_components, n_components)
Normalization matrix needed for embedding.
Square root of the kernel matrix on ``components_``.
References
----------
* Williams, C.K.I. and Seeger, M.
"Using the Nystrom method to speed up kernel machines",
Advances in neural information processing systems 2001
* T. Yang, Y. Li, M. Mahdavi, R. Jin and Z. Zhou
"Nystroem Method vs Random Fourier Features: A Theoretical and Empirical
Comparison",
Advances in Neural Information Processing Systems 2012
See also
--------
RBFSampler : An approximation to the RBF kernel using random Fourier
features.
    sklearn.metrics.pairwise.kernel_metrics : List of built-in kernels.
"""
def __init__(self, kernel="rbf", gamma=None, coef0=1, degree=3,
n_components=100, random_state=None):
self.kernel = kernel
self.gamma = gamma
self.coef0 = coef0
self.degree = degree
self.n_components = n_components
self.random_state = random_state
def fit(self, X, y=None):
"""Fit estimator to data.
Samples a subset of training points, computes kernel
on these and computes normalization matrix.
Parameters
----------
X : array-like, shape=(n_samples, n_feature)
Training data.
"""
rnd = check_random_state(self.random_state)
n_samples = X.shape[0]
# get basis vectors
if self.n_components > n_samples:
# XXX should we just bail?
n_components = n_samples
warnings.warn("n_components > n_samples. This is not possible.\n"
"n_components was set to n_samples, which results"
" in inefficient evaluation of the full kernel.")
else:
n_components = self.n_components
n_components = min(n_samples, n_components)
inds = rnd.permutation(n_samples)
basis_inds = inds[:n_components]
basis = X[basis_inds]
if callable(self.kernel):
basis_kernel = self.kernel(basis, basis)
else:
params = {"gamma": self.gamma,
"degree": self.degree,
"coef0": self.coef0}
basis_kernel = pairwise_kernels(basis, metric=self.kernel,
filter_params=True, **params)
# sqrt of kernel matrix on basis vectors
U, S, V = svd(basis_kernel)
self.normalization_ = np.dot(U * 1. / np.sqrt(S), V)
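        # normalization_ approximates K_mm^{-1/2} for the kernel matrix on the
        # basis points, so transform(X) returns K(X, basis) . K_mm^{-1/2} and
        # inner products of transformed points reproduce the Nystroem
        # approximation K ~= K_nm K_mm^{-1} K_mn.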
self.components_ = basis
self.component_indices_ = inds
return self
def transform(self, X):
"""Apply feature map to X.
Computes an approximate feature map using the kernel
between some training points and X.
Parameters
----------
X : array-like, shape=(n_samples, n_features)
Data to transform.
Returns
-------
X_transformed : array, shape=(n_samples, n_components)
Transformed data.
"""
if callable(self.kernel):
embedded = self.kernel(X, self.components_)
else:
embedded = pairwise_kernels(X, self.components_,
metric=self.kernel,
gamma=self.gamma)
return np.dot(embedded, self.normalization_.T)
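# Minimal usage sketch (illustrative only; the dataset, the linear classifier
# and the kernel parameters below are arbitrary choices, not implied by the
# module above): approximate an RBF kernel map with Nystroem and train a
# linear model in the resulting feature space.
if __name__ == "__main__":
    from sklearn import datasets
    from sklearn.linear_model import SGDClassifier
    digits = datasets.load_digits()
    X, y = digits.data / 16., digits.target
    # Map into an approximate RBF feature space spanned by 300 basis points.
    feature_map = Nystroem(kernel="rbf", gamma=0.2, n_components=300,
                           random_state=0)
    X_features = feature_map.fit_transform(X)
    # A linear model on the approximate feature map behaves like a kernel SVM.
    clf = SGDClassifier(random_state=0).fit(X_features, y)
    print("training accuracy: %.3f" % clf.score(X_features, y))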
| bsd-3-clause |
miaecle/deepchem | contrib/one_shot_models/multitask_regressor.py | 5 | 8197 | """
Implements a multitask graph-convolutional regression.
"""
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
__author__ = "Han Altae-Tran and Bharath Ramsundar"
__copyright__ = "Copyright 2016, Stanford University"
__license__ = "MIT"
import warnings
import os
import sys
import numpy as np
import tensorflow as tf
import sklearn.metrics
import tempfile
from deepchem.utils.save import log
from deepchem.models import Model
from deepchem.nn.copy import Input
from deepchem.nn.copy import Dense
from deepchem.data import pad_features
from deepchem.nn import model_ops
# TODO(rbharath): Find a way to get rid of this import?
from deepchem.models.tf_new_models.graph_topology import merge_dicts
from deepchem.models.tf_new_models.multitask_classifier import get_loss_fn
class MultitaskGraphRegressor(Model):
def __init__(self,
model,
n_tasks,
n_feat,
logdir=None,
batch_size=50,
final_loss='weighted_L2',
learning_rate=.001,
optimizer_type="adam",
learning_rate_decay_time=1000,
beta1=.9,
beta2=.999,
pad_batches=True,
verbose=True):
warnings.warn(
"MultitaskGraphRegressor is deprecated. "
"Will be removed in DeepChem 1.4.", DeprecationWarning)
super(MultitaskGraphRegressor, self).__init__(
model_dir=logdir, verbose=verbose)
self.n_tasks = n_tasks
self.final_loss = final_loss
self.model = model
self.sess = tf.Session(graph=self.model.graph)
with self.model.graph.as_default():
# Extract model info
self.batch_size = batch_size
self.pad_batches = pad_batches
# Get graph topology for x
self.graph_topology = self.model.get_graph_topology()
self.feat_dim = n_feat
# Building outputs
self.outputs = self.build()
self.loss_op = self.add_training_loss(self.final_loss, self.outputs)
self.learning_rate = learning_rate
self.T = learning_rate_decay_time
self.optimizer_type = optimizer_type
self.optimizer_beta1 = beta1
self.optimizer_beta2 = beta2
# Set epsilon
self.epsilon = 1e-7
self.add_optimizer()
# Initialize
self.init_fn = tf.global_variables_initializer()
self.sess.run(self.init_fn)
# Path to save checkpoint files, which matches the
# replicated supervisor's default path.
self._save_path = os.path.join(self.model_dir, 'model.ckpt')
def build(self):
# Create target inputs
self.label_placeholder = tf.placeholder(
dtype='float32', shape=(None, self.n_tasks), name="label_placeholder")
self.weight_placeholder = tf.placeholder(
        dtype='float32', shape=(None, self.n_tasks), name="weight_placeholder")
feat = self.model.return_outputs()
feat_size = feat.get_shape()[-1].value
outputs = []
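    # One scalar regression head per task: a fully connected layer mapping the
    # shared graph-convolution features (feat_size) to a single output.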
for task in range(self.n_tasks):
outputs.append(
tf.squeeze(
model_ops.fully_connected_layer(
tensor=feat,
size=1,
weight_init=tf.truncated_normal(
shape=[feat_size, 1], stddev=0.01),
bias_init=tf.constant(value=0., shape=[1]))))
return outputs
def add_optimizer(self):
if self.optimizer_type == "adam":
self.optimizer = tf.train.AdamOptimizer(
self.learning_rate,
beta1=self.optimizer_beta1,
beta2=self.optimizer_beta2,
epsilon=self.epsilon)
else:
raise ValueError("Optimizer type not recognized.")
# Get train function
self.train_op = self.optimizer.minimize(self.loss_op)
def construct_feed_dict(self, X_b, y_b=None, w_b=None, training=True):
"""Get initial information about task normalization"""
# TODO(rbharath): I believe this is total amount of data
n_samples = len(X_b)
if y_b is None:
y_b = np.zeros((n_samples, self.n_tasks))
if w_b is None:
w_b = np.zeros((n_samples, self.n_tasks))
targets_dict = {self.label_placeholder: y_b, self.weight_placeholder: w_b}
# Get graph information
atoms_dict = self.graph_topology.batch_to_feed_dict(X_b)
# TODO (hraut->rhbarath): num_datapoints should be a vector, with ith element being
# the number of labeled data points in target_i. This is to normalize each task
# num_dat_dict = {self.num_datapoints_placeholder : self.}
# Get other optimizer information
# TODO(rbharath): Figure out how to handle phase appropriately
feed_dict = merge_dicts([targets_dict, atoms_dict])
return feed_dict
def add_training_loss(self, final_loss, outputs):
"""Computes loss using logits."""
loss_fn = get_loss_fn(final_loss) # Get loss function
task_losses = []
# label_placeholder of shape (batch_size, n_tasks). Split into n_tasks
# tensors of shape (batch_size,)
task_labels = tf.split(
axis=1, num_or_size_splits=self.n_tasks, value=self.label_placeholder)
task_weights = tf.split(
axis=1, num_or_size_splits=self.n_tasks, value=self.weight_placeholder)
for task in range(self.n_tasks):
task_label_vector = task_labels[task]
task_weight_vector = task_weights[task]
task_loss = loss_fn(outputs[task], tf.squeeze(task_label_vector),
tf.squeeze(task_weight_vector))
task_losses.append(task_loss)
# It's ok to divide by just the batch_size rather than the number of nonzero
# examples (effect averages out)
total_loss = tf.add_n(task_losses)
total_loss = tf.math.divide(total_loss, self.batch_size)
return total_loss
def fit(self,
dataset,
nb_epoch=10,
max_checkpoints_to_keep=5,
log_every_N_batches=50,
checkpoint_interval=10,
**kwargs):
# Perform the optimization
log("Training for %d epochs" % nb_epoch, self.verbose)
# TODO(rbharath): Disabling saving for now to try to debug.
for epoch in range(nb_epoch):
log("Starting epoch %d" % epoch, self.verbose)
for batch_num, (X_b, y_b, w_b, ids_b) in enumerate(
dataset.iterbatches(self.batch_size, pad_batches=self.pad_batches)):
if batch_num % log_every_N_batches == 0:
log("On batch %d" % batch_num, self.verbose)
self.sess.run(
self.train_op, feed_dict=self.construct_feed_dict(X_b, y_b, w_b))
def save(self):
"""
No-op since this model doesn't currently support saving...
"""
pass
def predict(self, dataset, transformers=[], **kwargs):
"""Wraps predict to set batch_size/padding."""
return super(MultitaskGraphRegressor, self).predict(
dataset, transformers, batch_size=self.batch_size)
def predict_on_batch(self, X):
"""Return model output for the provided input.
"""
if self.pad_batches:
X = pad_features(self.batch_size, X)
# run eval data through the model
n_tasks = self.n_tasks
with self.sess.as_default():
feed_dict = self.construct_feed_dict(X)
# Shape (n_samples, n_tasks)
batch_outputs = self.sess.run(self.outputs, feed_dict=feed_dict)
n_samples = len(X)
outputs = np.zeros((n_samples, self.n_tasks))
for task, output in enumerate(batch_outputs):
outputs[:, task] = output
return outputs
def get_num_tasks(self):
"""Needed to use Model.predict() from superclass."""
return self.n_tasks
class DTNNMultitaskGraphRegressor(MultitaskGraphRegressor):
def build(self):
# Create target inputs
warnings.warn(
"DTNNMultitaskGraphRegressor is deprecated. "
"Will be removed in DeepChem 1.4.", DeprecationWarning)
self.label_placeholder = tf.placeholder(
dtype='float32', shape=(None, self.n_tasks), name="label_placeholder")
self.weight_placeholder = tf.placeholder(
        dtype='float32', shape=(None, self.n_tasks), name="weight_placeholder")
feat = self.model.return_outputs()
outputs = []
for task in range(self.n_tasks):
outputs.append(feat[:, task])
return outputs
| mit |
glennq/scikit-learn | examples/gaussian_process/plot_gpc_isoprobability.py | 64 | 3049 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=================================================================
Iso-probability lines for Gaussian Processes classification (GPC)
=================================================================
A two-dimensional classification example showing iso-probability lines for
the predicted probabilities.
"""
print(__doc__)
# Author: Vincent Dubourg <[email protected]>
# Adapted to GaussianProcessClassifier:
# Jan Hendrik Metzen <[email protected]>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import DotProduct, ConstantKernel as C
# A few constants
lim = 8
def g(x):
"""The function to predict (classification will then consist in predicting
whether g(x) <= 0 or not)"""
return 5. - x[:, 1] - .5 * x[:, 0] ** 2.
# Design of experiments
X = np.array([[-4.61611719, -6.00099547],
[4.10469096, 5.32782448],
[0.00000000, -0.50000000],
[-6.17289014, -4.6984743],
[1.3109306, -6.93271427],
[-5.03823144, 3.10584743],
[-2.87600388, 6.74310541],
[5.21301203, 4.26386883]])
# Observations
y = np.array(g(X) > 0, dtype=int)
# Instantiate and fit Gaussian Process Model
kernel = C(0.1, (1e-5, np.inf)) * DotProduct(sigma_0=0.1) ** 2
gp = GaussianProcessClassifier(kernel=kernel)
gp.fit(X, y)
print("Learned kernel: %s " % gp.kernel_)
# Evaluate real function and the predicted probability
res = 50
x1, x2 = np.meshgrid(np.linspace(- lim, lim, res),
np.linspace(- lim, lim, res))
xx = np.vstack([x1.reshape(x1.size), x2.reshape(x2.size)]).T
y_true = g(xx)
y_prob = gp.predict_proba(xx)[:, 1]
y_true = y_true.reshape((res, res))
y_prob = y_prob.reshape((res, res))
# Plot the probabilistic classification iso-values
fig = plt.figure(1)
ax = fig.gca()
ax.axes.set_aspect('equal')
plt.xticks([])
plt.yticks([])
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
cax = plt.imshow(y_prob, cmap=cm.gray_r, alpha=0.8,
extent=(-lim, lim, -lim, lim))
norm = plt.matplotlib.colors.Normalize(vmin=0., vmax=0.9)
cb = plt.colorbar(cax, ticks=[0., 0.2, 0.4, 0.6, 0.8, 1.], norm=norm)
cb.set_label(r'${\rm \mathbb{P}}\left[\widehat{G}(\mathbf{x}) \leq 0\right]$')
plt.clim(0, 1)
plt.plot(X[y <= 0, 0], X[y <= 0, 1], 'r.', markersize=12)
plt.plot(X[y > 0, 0], X[y > 0, 1], 'b.', markersize=12)
cs = plt.contour(x1, x2, y_true, [0.], colors='k', linestyles='dashdot')
cs = plt.contour(x1, x2, y_prob, [0.666], colors='b',
linestyles='solid')
plt.clabel(cs, fontsize=11)
cs = plt.contour(x1, x2, y_prob, [0.5], colors='k',
linestyles='dashed')
plt.clabel(cs, fontsize=11)
cs = plt.contour(x1, x2, y_prob, [0.334], colors='r',
linestyles='solid')
plt.clabel(cs, fontsize=11)
plt.show()
| bsd-3-clause |
low-sky/simscript | designs8_big.py | 1 | 9485 | import numpy as np
import matplotlib.pyplot as plt
import os
import astropy.units as u
import astropy.constants as const
import subprocess
import shutil
from astropy.table import Table
values = np.loadtxt('Design6.csv',skiprows=1,delimiter=',')
rootdir = os.path.expanduser("~")+'/SimSuite8/'
rootname = 'DesignBig'
fid_rootname = 'FiducialBig'
GenerateFields = True
Fiducials = True
NFiducials = 1
# Domain Definition
# Fixed parameters for this simulation.
BoxSize = 10*u.pc
Tvals = 10.0*u.K*np.ones(values.shape[0])
kMin = 2*np.ones(values.shape[0]).astype('int')
kMax = 4*np.ones(values.shape[0]).astype('int')
RootGridSize = 512
# Design parameters
logVPmin = np.log10(2)
logVPmax = logVPmin+np.log10(5)
bmin = 0.33
bmax = 0.66
logbetamin = -0.3
logbetamax = 0.3
MachMin = 5
MachMax = 12
np.random.seed(649810806)
seeds = np.random.randint(2**24, size=values.shape[0])
logVPvals = logVPmin+(logVPmax-logVPmin)*values[:,0]
logbetavals = logbetamin + (logbetamax-logbetamin)*values[:,2]
#fundamental design params
bvals = bmin + (bmax-bmin)*values[:,1]
MachVals = MachMin + (MachMax-MachMin)*values[:,3]
betavals = (1e1**logbetavals)
VPvals = (1e1**logVPvals)
kminVals = 2+2*values[:,4]
kmaxVals = 2*kminVals
#Derived parameters
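# The quantities below follow from the design parameters: an isothermal sound
# speed c_s = sqrt(k_B T / (2.33 m_n)) for a mean molecular weight of 2.33,
# a mean density chosen so the box reaches the requested virial parameter at
# the given Mach number, a crossing time t_cross = L / (Mach * c_s), and a
# magnetic field strength from the plasma beta definition
# beta = 8 * pi * rho * c_s**2 / B**2.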
SoundSpeed = ((const.k_B*Tvals/(2.33*const.m_n))**(0.5)).to(u.cm/u.s)
density = ((5*MachVals**2*SoundSpeed**2)/
(6*const.G*VPvals*BoxSize**2)).to(u.g/u.cm**3)
tcross = (BoxSize/(MachVals*SoundSpeed)).to(u.s)
Bvals = ((8*np.pi*density*SoundSpeed**2/betavals)**(0.5)).value*(u.G)
params = Table([Tvals,bvals,Bvals,MachVals,kMin,kMax,seeds],\
names=('Kinetic Temperature','Solenoidal Fraction','Magnetic Field','Mach Number','kMin','kMax','Seed'))
params['Virial Parameter'] = VPvals
params['Plasma Beta'] = betavals
params['Index'] = np.arange(values.shape[0]).astype('int')
params['Sound Speed']=SoundSpeed.to(u.cm/u.s)
params['Crossing Time']=tcross.to(u.s)
params['PL Index']=2*np.ones(values.shape[0])
params['Box Size']=RootGridSize*np.ones(values.shape[0]).astype('int')
params['Density']=density
params['kMin'] = kminVals
params['kMax'] = kmaxVals
# for idx,bval in enumerate(bvals):
# dirname = rootdir+rootname+str(params['Index'][idx])+'/'
# if not os.path.isdir(dirname):
# os.makedirs(dirname)
# # Generate a driving field into the RandomField1,2,3 etc.
# if GenerateFields:
# callstring = "perturbation_enzo.py --size={5} --alpha={4} --kmin={0} --kmax={1} --f_solenoidal={2:.3f} --seed={3}".\
# format(params[idx]['kMin'],
# params[idx]['kMax'],
# params[idx]['Solenoidal Fraction'],
# params[idx]['Seed'],
# params[idx]['PL Index'],
# params[idx]['Box Size'])
# callstring = 'python '+os.getcwd()+'/'+callstring
# print(callstring)
# subprocess.call(callstring,shell=True)
# shutil.move(os.getcwd()+'/RandomField1',dirname)
# shutil.move(os.getcwd()+'/RandomField2',dirname)
# shutil.move(os.getcwd()+'/RandomField3',dirname)
# shutil.copy(os.getcwd()+'/templates/MHDstart.pbs',dirname)
# shutil.copy(os.getcwd()+'/templates/MHDrestart.pbs',dirname)
# shutil.copy(os.getcwd()+'/templates/DrivenTurbulenceCTMHD',dirname)
# with open(dirname+'DrivenTurbulenceCTMHD','a') as template:
# template.write('TopGridDimensions = {0} {1} {2}\n'.\
# format(params[idx]['Box Size'],
# params[idx]['Box Size'],
# params[idx]['Box Size']))
# template.write('InitialBfield = {0:6e}\n'.\
# format(params[idx]['Magnetic Field']/\
# np.sqrt(4*np.pi)))
# # This 4pi is for CTMHD units
# template.write('MachNumber = {0:4f}\n'.\
# format(params[idx]['Mach Number']))
# template.write('RandomForcingMachNumber = {0:4f}\n'.\
# format(params[idx]['Mach Number']))
# template.write('RandomSeed = {0}\n'.\
# format(params[idx]['Seed']))
# template.write('EOSSoundSpeed = {0:6e}\n'.\
# format(params[idx]['Sound Speed']))
# template.write('SoundVelocity = {0:6e}\n'.\
# format(params[idx]['Sound Speed']))
# template.write('TimeUnits = {0:6e}\n'.\
# format(params[idx]['Crossing Time']))
# template.write('Density = {0:6e}\n'.\
# format(params[idx]['Density']))
# template.close()
# print(dirname)
# params.write('d8_parameter_table.ascii',format='ascii.fixed_width')
# params.write('d8_parameter_table.csv',format='ascii',delimiter=',')
if Fiducials:
fidval = 0.5*np.ones(NFiducials)
    seeds = np.random.randint(2**24, size=NFiducials)
logVPvals = logVPmin+(logVPmax-logVPmin)*fidval
logbetavals = logbetamin + (logbetamax-logbetamin)*fidval
Tvals = 10*np.ones(NFiducials)*u.K
kMin = 2*np.ones(NFiducials).astype('int')
kMax = 8*np.ones(NFiducials).astype('int')
bvals = bmin + (bmax-bmin)*fidval
MachVals = MachMin + (MachMax-MachMin)*fidval
betavals = (1e1**logbetavals)
VPvals = (1e1**logVPvals)
SoundSpeed = ((const.k_B*Tvals/(2.33*const.m_n))**(0.5)).to(u.cm/u.s)
density = ((5*MachVals**2*SoundSpeed**2)/
(6*const.G*VPvals*BoxSize**2)).to(u.g/u.cm**3)
tcross = (BoxSize/(MachVals*SoundSpeed)).to(u.s)
Bvals = ((8*np.pi*density*SoundSpeed**2/betavals)**(0.5)).value*(u.G)
kminVals = 2*np.ones(len(betavals))
kmaxVals = 8*np.ones(len(betavals))
fparams = Table([Tvals,bvals,Bvals,MachVals,kMin,kMax,seeds],\
names=('Kinetic Temperature','Solenoidal Fraction',
'Magnetic Field','Mach Number',
'kMin','kMax','Seed'))
fparams['Virial Parameter'] = VPvals
fparams['Plasma Beta'] = betavals
fparams['Index'] = np.arange(NFiducials).astype('int')
fparams['Sound Speed']=SoundSpeed.to(u.cm/u.s)
fparams['Crossing Time']=tcross.to(u.s)
fparams['PL Index']=2*np.ones(NFiducials)
fparams['Box Size']=RootGridSize*np.ones(NFiducials).astype('int')
fparams['Density']=density
fparams['kMin'] = kminVals
fparams['kMax'] = kmaxVals
for idx,bval in enumerate(bvals):
dirname = rootdir+fid_rootname+str(fparams['Index'][idx])+'/'
if not os.path.isdir(dirname):
os.makedirs(dirname)
# Generate a driving field into the RandomField1,2,3 etc.
if GenerateFields:
callstring = "perturbation_enzo.py --size={5} --alpha={4} --kmin={0} --kmax={1} --f_solenoidal={2:.3f} --seed={3}".\
format(fparams[idx]['kMin'],
fparams[idx]['kMax'],
fparams[idx]['Solenoidal Fraction'],
fparams[idx]['Seed'],
fparams[idx]['PL Index'],
fparams[idx]['Box Size'])
callstring = 'python '+os.getcwd()+'/'+callstring
print(callstring)
subprocess.call(callstring,shell=True)
shutil.move(os.getcwd()+'/RandomField1',dirname)
shutil.move(os.getcwd()+'/RandomField2',dirname)
shutil.move(os.getcwd()+'/RandomField3',dirname)
shutil.copy(os.getcwd()+'/templates/MHDstart.pbs',dirname)
shutil.copy(os.getcwd()+'/templates/MHDrestart.pbs',dirname)
shutil.copy(os.getcwd()+'/templates/DrivenTurbulenceCTMHD',dirname)
with open(dirname+'DrivenTurbulenceCTMHD','a') as template:
template.write('TopGridDimensions = {0} {1} {2}\n'.\
format(fparams[idx]['Box Size'],
fparams[idx]['Box Size'],
fparams[idx]['Box Size']))
template.write('InitialBfield = {0:6e}\n'.\
format(fparams[idx]['Magnetic Field']/\
np.sqrt(4*np.pi)))
# This 4pi is for CTMHD units
template.write('MachNumber = {0:4f}\n'.\
format(fparams[idx]['Mach Number']))
template.write('RandomForcingMachNumber = {0:4f}\n'.\
                           format(fparams[idx]['Mach Number']))
template.write('RandomSeed = {0}\n'.\
format(fparams[idx]['Seed']))
template.write('EOSSoundSpeed = {0:6e}\n'.\
format(fparams[idx]['Sound Speed']))
template.write('SoundVelocity = {0:6e}\n'.\
format(fparams[idx]['Sound Speed']))
template.write('TimeUnits = {0:6e}\n'.\
format(fparams[idx]['Crossing Time']))
template.write('Density = {0:6e}\n'.\
format(fparams[idx]['Density']))
template.close()
print(dirname)
fparams.write('d8_fiducial_table.ascii',format='ascii.fixed_width')
fparams.write('d8_fiducial_table.csv',format='ascii',delimiter=',')
| gpl-2.0 |
bnaul/scikit-learn | examples/linear_model/plot_theilsen.py | 76 | 3848 | """
====================
Theil-Sen Regression
====================
Computes a Theil-Sen Regression on a synthetic dataset.
See :ref:`theil_sen_regression` for more information on the regressor.
Compared to the OLS (ordinary least squares) estimator, the Theil-Sen
estimator is robust against outliers. It has a breakdown point of about 29.3%
in case of a simple linear regression which means that it can tolerate
arbitrary corrupted data (outliers) of up to 29.3% in the two-dimensional
case.
The estimation of the model is done by calculating the slopes and intercepts
of a subpopulation of all possible combinations of p subsample points. If an
intercept is fitted, p must be greater than or equal to n_features + 1. The
final slope and intercept is then defined as the spatial median of these
slopes and intercepts.
In certain cases Theil-Sen performs better than :ref:`RANSAC
<ransac_regression>` which is also a robust method. This is illustrated in the
second example below where outliers with respect to the x-axis perturb RANSAC.
Tuning the ``residual_threshold`` parameter of RANSAC remedies this but in
general a priori knowledge about the data and the nature of the outliers is
needed.
Due to the computational complexity of Theil-Sen it is recommended to use it
only for small problems in terms of number of samples and features. For larger
problems the ``max_subpopulation`` parameter restricts the magnitude of all
possible combinations of p subsample points to a randomly chosen subset and
therefore also limits the runtime. Therefore, Theil-Sen is applicable to larger
problems with the drawback of losing some of its mathematical properties since
it then works on a random subset.
"""
# Author: Florian Wilhelm -- <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, TheilSenRegressor
from sklearn.linear_model import RANSACRegressor
print(__doc__)
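# A small sketch of the idea described above, for intuition only: in one
# dimension the Theil-Sen slope is the median over the slopes of all point
# pairs. The scikit-learn estimator used below generalizes this to several
# features via the spatial median and optional subsampling.
def naive_theil_sen_slope(x, y):
    """Median of pairwise slopes (illustrative helper, not used below)."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x))
              for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return np.median(slopes)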
estimators = [('OLS', LinearRegression()),
('Theil-Sen', TheilSenRegressor(random_state=42)),
('RANSAC', RANSACRegressor(random_state=42)), ]
colors = {'OLS': 'turquoise', 'Theil-Sen': 'gold', 'RANSAC': 'lightgreen'}
lw = 2
# #############################################################################
# Outliers only in the y direction
np.random.seed(0)
n_samples = 200
# Linear model y = 3*x + N(2, 0.1**2)
x = np.random.randn(n_samples)
w = 3.
c = 2.
noise = 0.1 * np.random.randn(n_samples)
y = w * x + c + noise
# 10% outliers
y[-20:] += -20 * x[-20:]
X = x[:, np.newaxis]
plt.scatter(x, y, color='indigo', marker='x', s=40)
line_x = np.array([-3, 3])
for name, estimator in estimators:
t0 = time.time()
estimator.fit(X, y)
elapsed_time = time.time() - t0
y_pred = estimator.predict(line_x.reshape(2, 1))
plt.plot(line_x, y_pred, color=colors[name], linewidth=lw,
label='%s (fit time: %.2fs)' % (name, elapsed_time))
plt.axis('tight')
plt.legend(loc='upper left')
plt.title("Corrupt y")
# #############################################################################
# Outliers in the X direction
np.random.seed(0)
# Linear model y = 3*x + N(2, 0.1**2)
x = np.random.randn(n_samples)
noise = 0.1 * np.random.randn(n_samples)
y = 3 * x + 2 + noise
# 10% outliers
x[-20:] = 9.9
y[-20:] += 22
X = x[:, np.newaxis]
plt.figure()
plt.scatter(x, y, color='indigo', marker='x', s=40)
line_x = np.array([-3, 10])
for name, estimator in estimators:
t0 = time.time()
estimator.fit(X, y)
elapsed_time = time.time() - t0
y_pred = estimator.predict(line_x.reshape(2, 1))
plt.plot(line_x, y_pred, color=colors[name], linewidth=lw,
label='%s (fit time: %.2fs)' % (name, elapsed_time))
plt.axis('tight')
plt.legend(loc='upper left')
plt.title("Corrupt x")
plt.show()
| bsd-3-clause |
qifeigit/scikit-learn | examples/svm/plot_svm_anova.py | 250 | 2000 | """
=================================================
SVM-Anova: SVM with univariate feature selection
=================================================
This example shows how to perform univariate feature selection before running an SVC
(support vector classifier) to improve the classification scores.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets, feature_selection, cross_validation
from sklearn.pipeline import Pipeline
###############################################################################
# Import some data to play with
digits = datasets.load_digits()
y = digits.target
# Throw away most of the data, to be in a curse-of-dimensionality setting
y = y[:200]
X = digits.data[:200]
n_samples = len(y)
X = X.reshape((n_samples, -1))
# add 200 non-informative features
X = np.hstack((X, 2 * np.random.random((n_samples, 200))))
###############################################################################
# Create a feature-selection transform and an instance of SVM that we
# combine to form a full-blown estimator
transform = feature_selection.SelectPercentile(feature_selection.f_classif)
clf = Pipeline([('anova', transform), ('svc', svm.SVC(C=1.0))])
###############################################################################
# Plot the cross-validation score as a function of percentile of features
score_means = list()
score_stds = list()
percentiles = (1, 3, 6, 10, 15, 20, 30, 40, 60, 80, 100)
for percentile in percentiles:
clf.set_params(anova__percentile=percentile)
# Compute cross-validation score using all CPUs
this_scores = cross_validation.cross_val_score(clf, X, y, n_jobs=1)
score_means.append(this_scores.mean())
score_stds.append(this_scores.std())
plt.errorbar(percentiles, score_means, np.array(score_stds))
plt.title(
'Performance of the SVM-Anova varying the percentile of features selected')
plt.xlabel('Percentile')
plt.ylabel('Prediction rate')
plt.axis('tight')
plt.show()
| bsd-3-clause |
sahat/bokeh | bokeh/mplexporter/utils.py | 5 | 11306 | """
Utility Routines for Working with Matplotlib Objects
====================================================
"""
import itertools
import io
import base64
import numpy as np
import warnings
import matplotlib
from matplotlib.colors import colorConverter
from matplotlib.path import Path
from matplotlib.markers import MarkerStyle
from matplotlib.transforms import Affine2D
from matplotlib import ticker
def color_to_hex(color):
"""Convert matplotlib color code to hex color code"""
if color is None or colorConverter.to_rgba(color)[3] == 0:
return 'none'
else:
rgb = colorConverter.to_rgb(color)
return '#{0:02X}{1:02X}{2:02X}'.format(*(int(255 * c) for c in rgb))
def many_to_one(input_dict):
"""Convert a many-to-one mapping to a one-to-one mapping"""
return dict((key, val)
for keys, val in input_dict.items()
for key in keys)
LINESTYLES = many_to_one({('solid', '-', (None, None)): "10,0",
('dashed', '--'): "6,6",
('dotted', ':'): "2,2",
('dashdot', '-.'): "4,4,2,4",
('', ' ', 'None', 'none'): "none"})
def get_dasharray(obj, i=None):
"""Get an SVG dash array for the given matplotlib linestyle
Parameters
----------
obj : matplotlib object
The matplotlib line or path object, which must have a get_linestyle()
method which returns a valid matplotlib line code
    i : integer (optional)
        if not None, index into the line style sequence returned by
        obj.get_linestyle() (used when the object holds several line styles)
Returns
-------
dasharray : string
The HTML/SVG dasharray code associated with the object.
"""
if obj.__dict__.get('_dashSeq', None) is not None:
return ','.join(map(str, obj._dashSeq))
else:
ls = obj.get_linestyle()
if i is not None:
ls = ls[i]
dasharray = LINESTYLES.get(ls, None)
if dasharray is None:
warnings.warn("dash style '{0}' not understood: "
"defaulting to solid.".format(ls))
dasharray = LINESTYLES['-']
return dasharray
PATH_DICT = {Path.LINETO: 'L',
Path.MOVETO: 'M',
Path.CURVE3: 'S',
Path.CURVE4: 'C',
Path.CLOSEPOLY: 'Z'}
def SVG_path(path, transform=None, simplify=False):
"""Construct the vertices and SVG codes for the path
Parameters
----------
path : matplotlib.Path object
transform : matplotlib transform (optional)
if specified, the path will be transformed before computing the output.
Returns
-------
vertices : array
The shape (M, 2) array of vertices of the Path. Note that some Path
codes require multiple vertices, so the length of these vertices may
be longer than the list of path codes.
path_codes : list
A length N list of single-character path codes, N <= M. Each code is
a single character, in ['L','M','S','C','Z']. See the standard SVG
path specification for a description of these.
"""
if transform is not None:
path = path.transformed(transform)
vc_tuples = [(vertices if path_code != Path.CLOSEPOLY else [],
PATH_DICT[path_code])
for (vertices, path_code)
in path.iter_segments(simplify=simplify)]
if not vc_tuples:
# empty path is a special case
return np.zeros((0, 2)), []
else:
vertices, codes = zip(*vc_tuples)
vertices = np.array(list(itertools.chain(*vertices))).reshape(-1, 2)
return vertices, list(codes)
def get_path_style(path, fill=True):
"""Get the style dictionary for matplotlib path objects"""
style = {}
style['alpha'] = path.get_alpha()
if style['alpha'] is None:
style['alpha'] = 1
style['edgecolor'] = color_to_hex(path.get_edgecolor())
if fill:
style['facecolor'] = color_to_hex(path.get_facecolor())
else:
style['facecolor'] = 'none'
style['edgewidth'] = path.get_linewidth()
style['dasharray'] = get_dasharray(path)
style['zorder'] = path.get_zorder()
return style
def get_line_style(line):
"""Get the style dictionary for matplotlib line objects"""
style = {}
style['alpha'] = line.get_alpha()
if style['alpha'] is None:
style['alpha'] = 1
style['color'] = color_to_hex(line.get_color())
style['linewidth'] = line.get_linewidth()
style['dasharray'] = get_dasharray(line)
style['zorder'] = line.get_zorder()
return style
def get_marker_style(line):
"""Get the style dictionary for matplotlib marker objects"""
style = {}
style['alpha'] = line.get_alpha()
if style['alpha'] is None:
style['alpha'] = 1
style['facecolor'] = color_to_hex(line.get_markerfacecolor())
style['edgecolor'] = color_to_hex(line.get_markeredgecolor())
style['edgewidth'] = line.get_markeredgewidth()
style['marker'] = line.get_marker()
markerstyle = MarkerStyle(line.get_marker())
markersize = line.get_markersize()
markertransform = (markerstyle.get_transform()
+ Affine2D().scale(markersize, -markersize))
style['markerpath'] = SVG_path(markerstyle.get_path(),
markertransform)
style['markersize'] = markersize
style['zorder'] = line.get_zorder()
return style
def get_text_style(text):
"""Return the text style dict for a text instance"""
style = {}
style['alpha'] = text.get_alpha()
if style['alpha'] is None:
style['alpha'] = 1
style['fontsize'] = text.get_size()
style['color'] = color_to_hex(text.get_color())
style['halign'] = text.get_horizontalalignment() # left, center, right
style['valign'] = text.get_verticalalignment() # baseline, center, top
style['rotation'] = text.get_rotation()
style['zorder'] = text.get_zorder()
return style
def get_axis_properties(axis):
"""Return the property dictionary for a matplotlib.Axis instance"""
props = {}
label1On = axis._major_tick_kw.get('label1On', True)
if isinstance(axis, matplotlib.axis.XAxis):
if label1On:
props['position'] = "bottom"
else:
props['position'] = "top"
elif isinstance(axis, matplotlib.axis.YAxis):
if label1On:
props['position'] = "left"
else:
props['position'] = "right"
else:
raise ValueError("{0} should be an Axis instance".format(axis))
# Use tick values if appropriate
locator = axis.get_major_locator()
props['nticks'] = len(locator())
if isinstance(locator, ticker.FixedLocator):
props['tickvalues'] = list(locator())
else:
props['tickvalues'] = None
# Find tick formats
formatter = axis.get_major_formatter()
if isinstance(formatter, ticker.NullFormatter):
props['tickformat'] = ""
elif not any(label.get_visible() for label in axis.get_ticklabels()):
props['tickformat'] = ""
else:
props['tickformat'] = None
# Get axis scale
props['scale'] = axis.get_scale()
# Get major tick label size (assumes that's all we really care about!)
labels = axis.get_ticklabels()
if labels:
props['fontsize'] = labels[0].get_fontsize()
else:
props['fontsize'] = None
# Get associated grid
props['grid'] = get_grid_style(axis)
return props
def get_grid_style(axis):
gridlines = axis.get_gridlines()
if axis._gridOnMajor and len(gridlines) > 0:
color = color_to_hex(gridlines[0].get_color())
alpha = gridlines[0].get_alpha()
dasharray = get_dasharray(gridlines[0])
return dict(gridOn=True,
color=color,
dasharray=dasharray,
alpha=alpha)
else:
return {"gridOn":False}
def get_figure_properties(fig):
return {'figwidth': fig.get_figwidth(),
'figheight': fig.get_figheight(),
'dpi': fig.dpi}
def get_axes_properties(ax):
props = {'axesbg': color_to_hex(ax.patch.get_facecolor()),
'axesbgalpha': ax.patch.get_alpha(),
'bounds': ax.get_position().bounds,
'dynamic': ax.get_navigate(),
'axes': [get_axis_properties(ax.xaxis),
get_axis_properties(ax.yaxis)]}
for axname in ['x', 'y']:
axis = getattr(ax, axname + 'axis')
domain = getattr(ax, 'get_{0}lim'.format(axname))()
lim = domain
if isinstance(axis.converter, matplotlib.dates.DateConverter):
scale = 'date'
try:
import pandas as pd
from pandas.tseries.converter import PeriodConverter
except ImportError:
pd = None
if (pd is not None and isinstance(axis.converter,
PeriodConverter)):
_dates = [pd.Period(ordinal=int(d), freq=axis.freq)
for d in domain]
domain = [(d.year, d.month - 1, d.day,
d.hour, d.minute, d.second, 0)
for d in _dates]
else:
domain = [(d.year, d.month - 1, d.day,
d.hour, d.minute, d.second,
d.microsecond * 1E-3)
for d in matplotlib.dates.num2date(domain)]
else:
scale = axis.get_scale()
if scale not in ['date', 'linear', 'log']:
            raise ValueError("Unknown axis scale: "
                             "{0}".format(scale))
props[axname + 'scale'] = scale
props[axname + 'lim'] = lim
props[axname + 'domain'] = domain
return props
def iter_all_children(obj, skipContainers=False):
"""
    Returns an iterator over all children and nested children using
    obj's get_children() method.
    If skipContainers is true, only childless objects are returned.
"""
if hasattr(obj, 'get_children') and len(obj.get_children()) > 0:
for child in obj.get_children():
if not skipContainers:
yield child
# could use `yield from` in python 3...
for grandchild in iter_all_children(child, skipContainers):
yield grandchild
else:
yield obj
def get_legend_properties(ax, legend):
handles, labels = ax.get_legend_handles_labels()
visible = legend.get_visible()
return {'handles': handles, 'labels': labels, 'visible': visible}
def image_to_base64(image):
"""
Convert a matplotlib image to a base64 png representation
Parameters
----------
image : matplotlib image object
The image to be converted.
Returns
-------
image_base64 : string
The UTF8-encoded base64 string representation of the png image.
"""
ax = image.axes
binary_buffer = io.BytesIO()
# image is saved in axes coordinates: we need to temporarily
# set the correct limits to get the correct image
lim = ax.axis()
ax.axis(image.get_extent())
image.write_png(binary_buffer)
ax.axis(lim)
binary_buffer.seek(0)
return base64.b64encode(binary_buffer.read()).decode('utf-8')
| bsd-3-clause |
gfyoung/pandas | pandas/tests/indexes/datetimes/test_pickle.py | 4 | 1342 | import pytest
from pandas import NaT, date_range, to_datetime
import pandas._testing as tm
class TestPickle:
def test_pickle(self):
# GH#4606
idx = to_datetime(["2013-01-01", NaT, "2014-01-06"])
idx_p = tm.round_trip_pickle(idx)
assert idx_p[0] == idx[0]
assert idx_p[1] is NaT
assert idx_p[2] == idx[2]
def test_pickle_dont_infer_freq(self):
        # GH#11002
# don't infer freq
idx = date_range("1750-1-1", "2050-1-1", freq="7D")
idx_p = tm.round_trip_pickle(idx)
tm.assert_index_equal(idx, idx_p)
def test_pickle_after_set_freq(self):
dti = date_range("20130101", periods=3, tz="US/Eastern", name="foo")
dti = dti._with_freq(None)
res = tm.round_trip_pickle(dti)
tm.assert_index_equal(res, dti)
def test_roundtrip_pickle_with_tz(self):
# GH#8367
# round-trip of timezone
index = date_range("20130101", periods=3, tz="US/Eastern", name="foo")
unpickled = tm.round_trip_pickle(index)
tm.assert_index_equal(index, unpickled)
@pytest.mark.parametrize("freq", ["B", "C"])
def test_pickle_unpickle(self, freq):
rng = date_range("2009-01-01", "2010-01-01", freq=freq)
unpickled = tm.round_trip_pickle(rng)
assert unpickled.freq == freq
| bsd-3-clause |
schreiberx/sweet | benchmarks_sphere/paper_jrn_sl_exp/test_compare_wt_dt_vs_accuracy_galewsky_M256_6hours_l_n_vd/postprocessing_consolidate_prog_vrt.py | 8 | 6177 | #! /usr/bin/env python3
import sys
import math
from mule.plotting.Plotting import *
from mule.postprocessing.JobsData import *
from mule.postprocessing.JobsDataConsolidate import *
sys.path.append('../')
import pretty_plotting as pp
sys.path.pop()
mule_plotting_usetex(False)
groups = ['runtime.timestepping_method']
tagnames_y = [
'sphere_data_diff_prog_vrt.res_norm_l1',
'sphere_data_diff_prog_vrt.res_norm_l2',
'sphere_data_diff_prog_vrt.res_norm_linf',
]
j = JobsData('./job_bench_*', verbosity=0)
c = JobsDataConsolidate(j)
print("")
print("Groups:")
job_groups = c.create_groups(groups)
for key, g in job_groups.items():
print(key)
for tagname_y in tagnames_y:
params = []
params += [
{
'tagname_x': 'runtime.timestep_size',
'xlabel': "Timestep size (seconds)",
'ylabel': pp.latex_pretty_names[tagname_y],
'title': 'Timestep size vs. error',
'xscale': 'log',
'yscale': 'log',
'convergence': True,
},
]
params += [
{
'tagname_x': 'output.simulation_benchmark_timings.main_timestepping',
'xlabel': "Wallclock time (seconds)",
'ylabel': pp.latex_pretty_names[tagname_y],
'title': 'Wallclock time vs. error',
'xscale': 'log',
'yscale': 'log',
'convergence': False,
},
]
for param in params:
tagname_x = param['tagname_x']
xlabel = param['xlabel']
ylabel = param['ylabel']
title = param['title']
xscale = param['xscale']
yscale = param['yscale']
convergence = param['convergence']
print("*"*80)
print("Processing tag "+tagname_x)
print("*"*80)
if True:
"""
Plotting format
"""
# Filter out errors beyond this value!
def data_filter(x, y, jobdata):
if y == None:
return True
x = float(x)
y = float(y)
if math.isnan(y):
return True
if 'l1' in tagname_y:
if y > 1e1:
print("Sorting out L1 data "+str(y))
return True
elif 'l2' in tagname_y:
if y > 1e1:
print("Sorting out L2 data "+str(y))
return True
elif 'linf' in tagname_y:
if y > 1e2:
print("Sorting out Linf data "+str(y))
return True
else:
raise Exception("Unknown y tag "+tagname_y)
return False
d = JobsData_GroupsPlottingScattered(
job_groups,
tagname_x,
tagname_y,
data_filter = data_filter
)
fileid = "output_plotting_"+tagname_x.replace('.', '-').replace('_', '-')+"_vs_"+tagname_y.replace('.', '-').replace('_', '-')
if True:
#
# Proper naming and sorting of each label
#
# new data dictionary
data_new = {}
for key, data in d.data.items():
# generate nice tex label
#data['label'] = pp.get_pretty_name(key)
data['label'] = key #pp.get_pretty_name(key)
key_new = pp.get_pretty_name_order(key)+'_'+key
# copy data
data_new[key_new] = copy.copy(data)
# Copy back new data table
d.data = data_new
p = Plotting_ScatteredData()
def fun(p):
from matplotlib import ticker
from matplotlib.ticker import FormatStrFormatter
plt.tick_params(axis='x', which='minor')
p.ax.xaxis.set_minor_formatter(FormatStrFormatter("%.0f"))
p.ax.xaxis.set_major_formatter(FormatStrFormatter("%.0f"))
p.ax.xaxis.set_minor_locator(ticker.LogLocator(subs=[1.5, 2.0, 3.0, 5.0]))
for tick in p.ax.xaxis.get_minor_ticks():
tick.label.set_fontsize(8)
plt.tick_params(axis='y', which='minor')
p.ax.yaxis.set_minor_formatter(FormatStrFormatter("%.1e"))
p.ax.yaxis.set_major_formatter(FormatStrFormatter("%.1e"))
p.ax.yaxis.set_minor_locator(ticker.LogLocator(subs=[1.5, 2.0, 3.0, 5.0]))
for tick in p.ax.yaxis.get_minor_ticks():
tick.label.set_fontsize(6)
#
# Add convergence information
#
if convergence:
if 'l1' in tagname_y:
ps = [100, 1e-9]
elif 'l2' in tagname_y:
ps = [100, 5e-8]
elif 'linf' in tagname_y:
ps = [100, 1e-7]
else:
ps = [100, 1e-0]
p.add_convergence(2, ps)
annotate_text_template = "{:.1f} / {:.3f}"
p.plot(
data_plotting = d.get_data_float(),
xlabel = xlabel,
ylabel = ylabel,
title = title,
xscale = xscale,
yscale = yscale,
#annotate = True,
#annotate_each_nth_value = 3,
#annotate_fontsize = 6,
#annotate_text_template = annotate_text_template,
legend_fontsize = 8,
grid = True,
outfile = fileid+".pdf",
lambda_fun = fun,
)
print("Data plotting:")
d.print()
d.write(fileid+".csv")
print("Info:")
print(" NaN: Errors in simulations")
print(" None: No data available")
| mit |
xubenben/scikit-learn | examples/bicluster/plot_spectral_biclustering.py | 403 | 2011 | """
=============================================
A demo of the Spectral Biclustering algorithm
=============================================
This example demonstrates how to generate a checkerboard dataset and
bicluster it using the Spectral Biclustering algorithm.
The data is generated with the ``make_checkerboard`` function, then
shuffled and passed to the Spectral Biclustering algorithm. The rows
and columns of the shuffled matrix are rearranged to show the
biclusters found by the algorithm.
The outer product of the row and column label vectors shows a
representation of the checkerboard structure.
"""
print(__doc__)
# Author: Kemal Eren <[email protected]>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import make_checkerboard
from sklearn.datasets import samples_generator as sg
from sklearn.cluster.bicluster import SpectralBiclustering
from sklearn.metrics import consensus_score
n_clusters = (4, 3)
data, rows, columns = make_checkerboard(
shape=(300, 300), n_clusters=n_clusters, noise=10,
shuffle=False, random_state=0)
plt.matshow(data, cmap=plt.cm.Blues)
plt.title("Original dataset")
data, row_idx, col_idx = sg._shuffle(data, random_state=0)
plt.matshow(data, cmap=plt.cm.Blues)
plt.title("Shuffled dataset")
model = SpectralBiclustering(n_clusters=n_clusters, method='log',
random_state=0)
model.fit(data)
score = consensus_score(model.biclusters_,
(rows[:, row_idx], columns[:, col_idx]))
print("consensus score: {:.1f}".format(score))
fit_data = data[np.argsort(model.row_labels_)]
fit_data = fit_data[:, np.argsort(model.column_labels_)]
plt.matshow(fit_data, cmap=plt.cm.Blues)
plt.title("After biclustering; rearranged to show biclusters")
plt.matshow(np.outer(np.sort(model.row_labels_) + 1,
np.sort(model.column_labels_) + 1),
cmap=plt.cm.Blues)
plt.title("Checkerboard structure of rearranged data")
plt.show()
| bsd-3-clause |
wwf5067/statsmodels | statsmodels/examples/ex_kernel_test_functional.py | 34 | 2246 | # -*- coding: utf-8 -*-
"""
Created on Tue Jan 08 19:03:20 2013
Author: Josef Perktold
"""
from __future__ import print_function
if __name__ == '__main__':
import numpy as np
from statsmodels.regression.linear_model import OLS
#from statsmodels.nonparametric.api import KernelReg
import statsmodels.sandbox.nonparametric.kernel_extras as smke
seed = np.random.randint(999999)
#seed = 661176
print(seed)
np.random.seed(seed)
sig_e = 0.5 #0.1
nobs, k_vars = 200, 1
x = np.random.uniform(-2, 2, size=(nobs, k_vars))
x.sort()
order = 3
exog = x**np.arange(order + 1)
beta = np.array([1, 1, 0.1, 0.0])[:order+1] # 1. / np.arange(1, order + 2)
y_true = np.dot(exog, beta)
y = y_true + sig_e * np.random.normal(size=nobs)
endog = y
print('DGP')
print('nobs=%d, beta=%r, sig_e=%3.1f' % (nobs, beta, sig_e))
mod_ols = OLS(endog, exog[:,:2])
res_ols = mod_ols.fit()
#'cv_ls'[1000, 0.5][0.01, 0.45]
tst = smke.TestFForm(endog, exog[:,:2], bw=[0.01, 0.45], var_type='cc',
fform=lambda x,p: mod_ols.predict(p,x),
estimator=lambda y,x: OLS(y,x).fit().params,
nboot=1000)
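    # TestFForm tests whether the parametric specification given by `fform`
    # (here the OLS fit on a constant and x) is consistent with the data,
    # using a kernel-regression check on the residuals of that fit; the null
    # distribution of the statistic is approximated by the `nboot` bootstrap
    # replications whose p-values are printed below.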
print('bw', tst.bw)
print('tst.test_stat', tst.test_stat)
print(tst.sig)
print('tst.boots_results mean, min, max', (tst.boots_results.mean(),
tst.boots_results.min(),
tst.boots_results.max()))
print('lower tail bootstrap p-value', (tst.boots_results < tst.test_stat).mean())
print('upper tail bootstrap p-value', (tst.boots_results >= tst.test_stat).mean())
from scipy import stats
    print('asymp. normal p-value (2-sided)', stats.norm.sf(np.abs(tst.test_stat))*2)
    print('asymp. normal p-value (upper)', stats.norm.sf(tst.test_stat))
do_plot=True
if do_plot:
import matplotlib.pyplot as plt
plt.figure()
plt.plot(x, y, '.')
plt.plot(x, res_ols.fittedvalues)
plt.title('OLS fit')
plt.figure()
plt.hist(tst.boots_results.ravel(), bins=20)
        plt.title('bootstrap histogram of test statistic')
plt.show()
| bsd-3-clause |
google-research/dreamer | setup.py | 1 | 1488 | # Copyright 2019 The Dreamer Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import setuptools
setuptools.setup(
name='dreamer',
version='1.0.0',
description=(
        'Dream to Control: Learning Behaviors by Latent Imagination'),
license='Apache 2.0',
url='http://github.com/google-research/dreamer',
install_requires=[
'dm_control',
'gym',
'matplotlib',
'ruamel.yaml',
'scikit-image',
'scipy',
'tensorflow-gpu==1.13.1',
'tensorflow_probability==0.6.0',
],
packages=setuptools.find_packages(),
classifiers=[
'Programming Language :: Python :: 3',
'License :: OSI Approved :: Apache Software License',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Intended Audience :: Science/Research',
],
)
| apache-2.0 |
jreback/pandas | pandas/tests/window/test_apply.py | 2 | 4880 | import numpy as np
import pytest
from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, date_range
import pandas._testing as tm
@pytest.mark.parametrize("bad_raw", [None, 1, 0])
def test_rolling_apply_invalid_raw(bad_raw):
with pytest.raises(ValueError, match="raw parameter must be `True` or `False`"):
Series(range(3)).rolling(1).apply(len, raw=bad_raw)
def test_rolling_apply_out_of_bounds(engine_and_raw):
# gh-1850
engine, raw = engine_and_raw
vals = Series([1, 2, 3, 4])
result = vals.rolling(10).apply(np.sum, engine=engine, raw=raw)
assert result.isna().all()
result = vals.rolling(10, min_periods=1).apply(np.sum, engine=engine, raw=raw)
expected = Series([1, 3, 6, 10], dtype=float)
tm.assert_almost_equal(result, expected)
@pytest.mark.parametrize("window", [2, "2s"])
def test_rolling_apply_with_pandas_objects(window):
# 5071
df = DataFrame(
{"A": np.random.randn(5), "B": np.random.randint(0, 10, size=5)},
index=date_range("20130101", periods=5, freq="s"),
)
# we have an equal spaced timeseries index
# so simulate removing the first period
def f(x):
if x.index[0] == df.index[0]:
return np.nan
return x.iloc[-1]
result = df.rolling(window).apply(f, raw=False)
expected = df.iloc[2:].reindex_like(df)
tm.assert_frame_equal(result, expected)
with tm.external_error_raised(AttributeError):
df.rolling(window).apply(f, raw=True)
def test_rolling_apply(engine_and_raw):
engine, raw = engine_and_raw
expected = Series([], dtype="float64")
result = expected.rolling(10).apply(lambda x: x.mean(), engine=engine, raw=raw)
tm.assert_series_equal(result, expected)
# gh-8080
s = Series([None, None, None])
result = s.rolling(2, min_periods=0).apply(lambda x: len(x), engine=engine, raw=raw)
expected = Series([1.0, 2.0, 2.0])
tm.assert_series_equal(result, expected)
result = s.rolling(2, min_periods=0).apply(len, engine=engine, raw=raw)
tm.assert_series_equal(result, expected)
def test_all_apply(engine_and_raw):
engine, raw = engine_and_raw
df = (
DataFrame(
{"A": date_range("20130101", periods=5, freq="s"), "B": range(5)}
).set_index("A")
* 2
)
er = df.rolling(window=1)
r = df.rolling(window="1s")
result = r.apply(lambda x: 1, engine=engine, raw=raw)
expected = er.apply(lambda x: 1, engine=engine, raw=raw)
tm.assert_frame_equal(result, expected)
def test_ragged_apply(engine_and_raw):
engine, raw = engine_and_raw
df = DataFrame({"B": range(5)})
df.index = [
Timestamp("20130101 09:00:00"),
Timestamp("20130101 09:00:02"),
Timestamp("20130101 09:00:03"),
Timestamp("20130101 09:00:05"),
Timestamp("20130101 09:00:06"),
]
f = lambda x: 1
result = df.rolling(window="1s", min_periods=1).apply(f, engine=engine, raw=raw)
expected = df.copy()
expected["B"] = 1.0
tm.assert_frame_equal(result, expected)
result = df.rolling(window="2s", min_periods=1).apply(f, engine=engine, raw=raw)
expected = df.copy()
expected["B"] = 1.0
tm.assert_frame_equal(result, expected)
result = df.rolling(window="5s", min_periods=1).apply(f, engine=engine, raw=raw)
expected = df.copy()
expected["B"] = 1.0
tm.assert_frame_equal(result, expected)
def test_invalid_engine():
with pytest.raises(ValueError, match="engine must be either 'numba' or 'cython'"):
Series(range(1)).rolling(1).apply(lambda x: x, engine="foo")
def test_invalid_engine_kwargs_cython():
with pytest.raises(ValueError, match="cython engine does not accept engine_kwargs"):
Series(range(1)).rolling(1).apply(
lambda x: x, engine="cython", engine_kwargs={"nopython": False}
)
def test_invalid_raw_numba():
with pytest.raises(
ValueError, match="raw must be `True` when using the numba engine"
):
Series(range(1)).rolling(1).apply(lambda x: x, raw=False, engine="numba")
@pytest.mark.parametrize("args_kwargs", [[None, {"par": 10}], [(10,), None]])
def test_rolling_apply_args_kwargs(args_kwargs):
# GH 33433
def foo(x, par):
return np.sum(x + par)
df = DataFrame({"gr": [1, 1], "a": [1, 2]})
idx = Index(["gr", "a"])
expected = DataFrame([[11.0, 11.0], [11.0, 12.0]], columns=idx)
result = df.rolling(1).apply(foo, args=args_kwargs[0], kwargs=args_kwargs[1])
tm.assert_frame_equal(result, expected)
midx = MultiIndex.from_tuples([(1, 0), (1, 1)], names=["gr", None])
expected = Series([11.0, 12.0], index=midx, name="a")
gb_rolling = df.groupby("gr")["a"].rolling(1)
result = gb_rolling.apply(foo, args=args_kwargs[0], kwargs=args_kwargs[1])
tm.assert_series_equal(result, expected)
| bsd-3-clause |
DSLituiev/scikit-learn | sklearn/manifold/tests/test_isomap.py | 121 | 4301 | from itertools import product
import numpy as np
from numpy.testing import (assert_almost_equal, assert_array_almost_equal,
assert_equal)
from sklearn import datasets
from sklearn import manifold
from sklearn import neighbors
from sklearn import pipeline
from sklearn import preprocessing
from sklearn.utils.testing import assert_less
eigen_solvers = ['auto', 'dense', 'arpack']
path_methods = ['auto', 'FW', 'D']
def test_isomap_simple_grid():
# Isomap should preserve distances when all neighbors are used
N_per_side = 5
Npts = N_per_side ** 2
n_neighbors = Npts - 1
# grid of equidistant points in 2D, n_components = n_dim
X = np.array(list(product(range(N_per_side), repeat=2)))
# distances from each point to all others
G = neighbors.kneighbors_graph(X, n_neighbors,
mode='distance').toarray()
for eigen_solver in eigen_solvers:
for path_method in path_methods:
clf = manifold.Isomap(n_neighbors=n_neighbors, n_components=2,
eigen_solver=eigen_solver,
path_method=path_method)
clf.fit(X)
G_iso = neighbors.kneighbors_graph(clf.embedding_,
n_neighbors,
mode='distance').toarray()
assert_array_almost_equal(G, G_iso)
def test_isomap_reconstruction_error():
# Same setup as in test_isomap_simple_grid, with an added dimension
N_per_side = 5
Npts = N_per_side ** 2
n_neighbors = Npts - 1
# grid of equidistant points in 2D, n_components = n_dim
X = np.array(list(product(range(N_per_side), repeat=2)))
# add noise in a third dimension
rng = np.random.RandomState(0)
noise = 0.1 * rng.randn(Npts, 1)
X = np.concatenate((X, noise), 1)
# compute input kernel
G = neighbors.kneighbors_graph(X, n_neighbors,
mode='distance').toarray()
centerer = preprocessing.KernelCenterer()
K = centerer.fit_transform(-0.5 * G ** 2)
for eigen_solver in eigen_solvers:
for path_method in path_methods:
clf = manifold.Isomap(n_neighbors=n_neighbors, n_components=2,
eigen_solver=eigen_solver,
path_method=path_method)
clf.fit(X)
# compute output kernel
G_iso = neighbors.kneighbors_graph(clf.embedding_,
n_neighbors,
mode='distance').toarray()
K_iso = centerer.fit_transform(-0.5 * G_iso ** 2)
# make sure error agrees
reconstruction_error = np.linalg.norm(K - K_iso) / Npts
assert_almost_equal(reconstruction_error,
clf.reconstruction_error())
def test_transform():
n_samples = 200
n_components = 10
noise_scale = 0.01
# Create S-curve dataset
X, y = datasets.samples_generator.make_s_curve(n_samples, random_state=0)
# Compute isomap embedding
iso = manifold.Isomap(n_components, 2)
X_iso = iso.fit_transform(X)
# Re-embed a noisy version of the points
rng = np.random.RandomState(0)
noise = noise_scale * rng.randn(*X.shape)
X_iso2 = iso.transform(X + noise)
# Make sure the rms error on re-embedding is comparable to noise_scale
assert_less(np.sqrt(np.mean((X_iso - X_iso2) ** 2)), 2 * noise_scale)
def test_pipeline():
# check that Isomap works fine as a transformer in a Pipeline
# only checks that no error is raised.
# TODO check that it actually does something useful
X, y = datasets.make_blobs(random_state=0)
clf = pipeline.Pipeline(
[('isomap', manifold.Isomap()),
('clf', neighbors.KNeighborsClassifier())])
clf.fit(X, y)
assert_less(.9, clf.score(X, y))
def test_isomap_clone_bug():
# regression test for bug reported in #6062
model = manifold.Isomap()
for n_neighbors in [10, 15, 20]:
model.set_params(n_neighbors=n_neighbors)
model.fit(np.random.rand(50, 2))
assert_equal(model.nbrs_.n_neighbors,
n_neighbors)
| bsd-3-clause |
weld-project/weld | examples/python/grizzly/movielens.py | 3 | 1438 | import pandas as pd
import time
# Make display smaller
pd.options.display.max_rows = 10
unames = ['user_id', 'gender', 'age', 'occupation', 'zip']
users = pd.read_table('data/ml-1m/users.dat', sep='::', header=None,
names=unames)
rnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table('data/ml-1m/ratings.dat', sep='::', header=None,
names=rnames)
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('data/ml-1m/movies.dat', sep='::', header=None,
names=mnames)
start = time.time()
data = pd.merge(pd.merge(ratings, users), movies)
end = time.time()
print data
start1 = time.time()
mean_ratings = data.pivot_table('rating', index='title', columns='gender',
aggfunc='mean')
ratings_by_title = data.groupby('title').size()
active_titles = ratings_by_title.index[ratings_by_title >= 250]
mean_ratings = mean_ratings.loc[active_titles]
mean_ratings['diff'] = mean_ratings['M'] - mean_ratings['F']
sorted_by_diff = mean_ratings.sort_values(by='diff')
rating_std_by_title = data.groupby('title')['rating'].std()
rating_std_by_title = rating_std_by_title.loc[active_titles]
rating_std_by_title = rating_std_by_title.sort_values(ascending=False)[:10]
end1 = time.time()
print sorted_by_diff
print rating_std_by_title
print "Time to merge:", (end - start)
print "Time for analysis:", (end1 - start1)
| bsd-3-clause |
zhenv5/scikit-learn | examples/applications/plot_out_of_core_classification.py | 255 | 13919 | """
======================================================
Out-of-core classification of text documents
======================================================
This is an example showing how scikit-learn can be used for classification
using an out-of-core approach: learning from data that doesn't fit into main
memory. We make use of an online classifier, i.e., one that supports the
partial_fit method, that will be fed with batches of examples. To guarantee
that the features space remains the same over time we leverage a
HashingVectorizer that will project each example into the same feature space.
This is especially useful in the case of text classification where new
features (words) may appear in each batch.
The dataset used in this example is Reuters-21578 as provided by the UCI ML
repository. It will be automatically downloaded and uncompressed on first run.
The plot represents the learning curve of the classifier: the evolution
of classification accuracy over the course of the mini-batches. Accuracy is
measured on the first 1000 samples, held out as a validation set.
To limit the memory consumption, we queue examples up to a fixed amount before
feeding them to the learner.
"""
# Authors: Eustache Diemert <[email protected]>
# @FedericoV <https://github.com/FedericoV/>
# License: BSD 3 clause
from __future__ import print_function
from glob import glob
import itertools
import os.path
import re
import tarfile
import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from sklearn.externals.six.moves import html_parser
from sklearn.externals.six.moves import urllib
from sklearn.datasets import get_data_home
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
def _not_in_sphinx():
# Hack to detect whether we are running by the sphinx builder
return '__file__' in globals()
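# Editor's sketch (hedged, not part of the original example): the out-of-core
# pattern used below pairs a stateless HashingVectorizer, which maps any batch
# of raw text into the same fixed-size feature space, with an estimator that
# exposes `partial_fit` so it can be updated one mini-batch at a time. The two
# toy documents and their 0/1 labels are invented purely for illustration.
_sketch_vectorizer = HashingVectorizer(decode_error='ignore',
                                       n_features=2 ** 18)
_sketch_clf = SGDClassifier()
_sketch_X = _sketch_vectorizer.transform([u'oil prices rise sharply',
                                          u'talks on a merger announced'])
_sketch_clf.partial_fit(_sketch_X, np.array([0, 1]), classes=np.array([0, 1]))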
###############################################################################
# Reuters Dataset related routines
###############################################################################
class ReutersParser(html_parser.HTMLParser):
"""Utility class to parse a SGML file and yield documents one at a time."""
def __init__(self, encoding='latin-1'):
html_parser.HTMLParser.__init__(self)
self._reset()
self.encoding = encoding
def handle_starttag(self, tag, attrs):
method = 'start_' + tag
getattr(self, method, lambda x: None)(attrs)
def handle_endtag(self, tag):
method = 'end_' + tag
getattr(self, method, lambda: None)()
def _reset(self):
self.in_title = 0
self.in_body = 0
self.in_topics = 0
self.in_topic_d = 0
self.title = ""
self.body = ""
self.topics = []
self.topic_d = ""
def parse(self, fd):
self.docs = []
for chunk in fd:
self.feed(chunk.decode(self.encoding))
for doc in self.docs:
yield doc
self.docs = []
self.close()
def handle_data(self, data):
if self.in_body:
self.body += data
elif self.in_title:
self.title += data
elif self.in_topic_d:
self.topic_d += data
def start_reuters(self, attributes):
pass
def end_reuters(self):
self.body = re.sub(r'\s+', r' ', self.body)
self.docs.append({'title': self.title,
'body': self.body,
'topics': self.topics})
self._reset()
def start_title(self, attributes):
self.in_title = 1
def end_title(self):
self.in_title = 0
def start_body(self, attributes):
self.in_body = 1
def end_body(self):
self.in_body = 0
def start_topics(self, attributes):
self.in_topics = 1
def end_topics(self):
self.in_topics = 0
def start_d(self, attributes):
self.in_topic_d = 1
def end_d(self):
self.in_topic_d = 0
self.topics.append(self.topic_d)
self.topic_d = ""
def stream_reuters_documents(data_path=None):
"""Iterate over documents of the Reuters dataset.
The Reuters archive will automatically be downloaded and uncompressed if
the `data_path` directory does not exist.
Documents are represented as dictionaries with 'body' (str),
'title' (str), 'topics' (list(str)) keys.
"""
DOWNLOAD_URL = ('http://archive.ics.uci.edu/ml/machine-learning-databases/'
'reuters21578-mld/reuters21578.tar.gz')
ARCHIVE_FILENAME = 'reuters21578.tar.gz'
if data_path is None:
data_path = os.path.join(get_data_home(), "reuters")
if not os.path.exists(data_path):
"""Download the dataset."""
print("downloading dataset (once and for all) into %s" %
data_path)
os.mkdir(data_path)
def progress(blocknum, bs, size):
total_sz_mb = '%.2f MB' % (size / 1e6)
current_sz_mb = '%.2f MB' % ((blocknum * bs) / 1e6)
if _not_in_sphinx():
print('\rdownloaded %s / %s' % (current_sz_mb, total_sz_mb),
end='')
archive_path = os.path.join(data_path, ARCHIVE_FILENAME)
urllib.request.urlretrieve(DOWNLOAD_URL, filename=archive_path,
reporthook=progress)
if _not_in_sphinx():
print('\r', end='')
print("untarring Reuters dataset...")
tarfile.open(archive_path, 'r:gz').extractall(data_path)
print("done.")
parser = ReutersParser()
for filename in glob(os.path.join(data_path, "*.sgm")):
for doc in parser.parse(open(filename, 'rb')):
yield doc
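# Illustrative usage of the generator above (editor's addition; assumes the
# archive is available locally or can be downloaded):
#     for doc in itertools.islice(stream_reuters_documents(), 3):
#         print(doc['title'], doc['topics'])
# Each `doc` is a plain dict with 'title', 'body' and 'topics' keys.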
###############################################################################
# Main
###############################################################################
# Create the vectorizer and limit the number of features to a reasonable
# maximum
vectorizer = HashingVectorizer(decode_error='ignore', n_features=2 ** 18,
non_negative=True)
# Iterator over parsed Reuters SGML files.
data_stream = stream_reuters_documents()
# We learn a binary classification between the "acq" class and all the others.
# "acq" was chosen as it is more or less evenly distributed in the Reuters
# files. For other datasets, one should take care of creating a test set with
# a realistic portion of positive instances.
all_classes = np.array([0, 1])
positive_class = 'acq'
# Here are some classifiers that support the `partial_fit` method
partial_fit_classifiers = {
'SGD': SGDClassifier(),
'Perceptron': Perceptron(),
'NB Multinomial': MultinomialNB(alpha=0.01),
'Passive-Aggressive': PassiveAggressiveClassifier(),
}
def get_minibatch(doc_iter, size, pos_class=positive_class):
"""Extract a minibatch of examples, return a tuple X_text, y.
Note: size is before excluding invalid docs with no topics assigned.
"""
data = [(u'{title}\n\n{body}'.format(**doc), pos_class in doc['topics'])
for doc in itertools.islice(doc_iter, size)
if doc['topics']]
if not len(data):
return np.asarray([], dtype=int), np.asarray([], dtype=int)
X_text, y = zip(*data)
return X_text, np.asarray(y, dtype=int)
def iter_minibatches(doc_iter, minibatch_size):
"""Generator of minibatches."""
X_text, y = get_minibatch(doc_iter, minibatch_size)
while len(X_text):
yield X_text, y
X_text, y = get_minibatch(doc_iter, minibatch_size)
# test data statistics
test_stats = {'n_test': 0, 'n_test_pos': 0}
# First we hold out a number of examples to estimate accuracy
n_test_documents = 1000
tick = time.time()
X_test_text, y_test = get_minibatch(data_stream, 1000)
parsing_time = time.time() - tick
tick = time.time()
X_test = vectorizer.transform(X_test_text)
vectorizing_time = time.time() - tick
test_stats['n_test'] += len(y_test)
test_stats['n_test_pos'] += sum(y_test)
print("Test set is %d documents (%d positive)" % (len(y_test), sum(y_test)))
def progress(cls_name, stats):
"""Report progress information, return a string."""
duration = time.time() - stats['t0']
s = "%20s classifier : \t" % cls_name
s += "%(n_train)6d train docs (%(n_train_pos)6d positive) " % stats
s += "%(n_test)6d test docs (%(n_test_pos)6d positive) " % test_stats
s += "accuracy: %(accuracy).3f " % stats
s += "in %.2fs (%5d docs/s)" % (duration, stats['n_train'] / duration)
return s
cls_stats = {}
for cls_name in partial_fit_classifiers:
stats = {'n_train': 0, 'n_train_pos': 0,
'accuracy': 0.0, 'accuracy_history': [(0, 0)], 't0': time.time(),
'runtime_history': [(0, 0)], 'total_fit_time': 0.0}
cls_stats[cls_name] = stats
get_minibatch(data_stream, n_test_documents)
# Discard test set
# We will feed the classifier with mini-batches of 1000 documents; this means
# we have at most 1000 docs in memory at any time. The smaller the document
# batch, the bigger the relative overhead of the partial fit methods.
minibatch_size = 1000
# Create the data_stream that parses Reuters SGML files and iterates on
# documents as a stream.
minibatch_iterators = iter_minibatches(data_stream, minibatch_size)
total_vect_time = 0.0
# Main loop : iterate on mini-batchs of examples
for i, (X_train_text, y_train) in enumerate(minibatch_iterators):
tick = time.time()
X_train = vectorizer.transform(X_train_text)
total_vect_time += time.time() - tick
for cls_name, cls in partial_fit_classifiers.items():
tick = time.time()
# update estimator with examples in the current mini-batch
cls.partial_fit(X_train, y_train, classes=all_classes)
# accumulate test accuracy stats
cls_stats[cls_name]['total_fit_time'] += time.time() - tick
cls_stats[cls_name]['n_train'] += X_train.shape[0]
cls_stats[cls_name]['n_train_pos'] += sum(y_train)
tick = time.time()
cls_stats[cls_name]['accuracy'] = cls.score(X_test, y_test)
cls_stats[cls_name]['prediction_time'] = time.time() - tick
acc_history = (cls_stats[cls_name]['accuracy'],
cls_stats[cls_name]['n_train'])
cls_stats[cls_name]['accuracy_history'].append(acc_history)
run_history = (cls_stats[cls_name]['accuracy'],
total_vect_time + cls_stats[cls_name]['total_fit_time'])
cls_stats[cls_name]['runtime_history'].append(run_history)
if i % 3 == 0:
print(progress(cls_name, cls_stats[cls_name]))
if i % 3 == 0:
print('\n')
###############################################################################
# Plot results
###############################################################################
def plot_accuracy(x, y, x_legend):
"""Plot accuracy as a function of x."""
x = np.array(x)
y = np.array(y)
plt.title('Classification accuracy as a function of %s' % x_legend)
plt.xlabel('%s' % x_legend)
plt.ylabel('Accuracy')
plt.grid(True)
plt.plot(x, y)
rcParams['legend.fontsize'] = 10
cls_names = list(sorted(cls_stats.keys()))
# Plot accuracy evolution
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with #examples
accuracy, n_examples = zip(*stats['accuracy_history'])
plot_accuracy(n_examples, accuracy, "training examples (#)")
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc='best')
plt.figure()
for _, stats in sorted(cls_stats.items()):
# Plot accuracy evolution with runtime
accuracy, runtime = zip(*stats['runtime_history'])
plot_accuracy(runtime, accuracy, 'runtime (s)')
ax = plt.gca()
ax.set_ylim((0.8, 1))
plt.legend(cls_names, loc='best')
# Plot fitting times
plt.figure()
fig = plt.gcf()
cls_runtime = []
for cls_name, stats in sorted(cls_stats.items()):
cls_runtime.append(stats['total_fit_time'])
cls_runtime.append(total_vect_time)
cls_names.append('Vectorization')
bar_colors = rcParams['axes.color_cycle'][:len(cls_names)]
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5,
color=bar_colors)
ax.set_xticks(np.linspace(0.25, len(cls_names) - 0.75, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=10)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('runtime (s)')
ax.set_title('Training Times')
def autolabel(rectangles):
"""attach some text vi autolabel on rectangles."""
for rect in rectangles:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2.,
1.05 * height, '%.4f' % height,
ha='center', va='bottom')
autolabel(rectangles)
plt.show()
# Plot prediction times
plt.figure()
#fig = plt.gcf()
cls_runtime = []
cls_names = list(sorted(cls_stats.keys()))
for cls_name, stats in sorted(cls_stats.items()):
cls_runtime.append(stats['prediction_time'])
cls_runtime.append(parsing_time)
cls_names.append('Read/Parse\n+Feat.Extr.')
cls_runtime.append(vectorizing_time)
cls_names.append('Hashing\n+Vect.')
bar_colors = rcParams['axes.color_cycle'][:len(cls_names)]
ax = plt.subplot(111)
rectangles = plt.bar(range(len(cls_names)), cls_runtime, width=0.5,
color=bar_colors)
ax.set_xticks(np.linspace(0.25, len(cls_names) - 0.75, len(cls_names)))
ax.set_xticklabels(cls_names, fontsize=8)
plt.setp(plt.xticks()[1], rotation=30)
ymax = max(cls_runtime) * 1.2
ax.set_ylim((0, ymax))
ax.set_ylabel('runtime (s)')
ax.set_title('Prediction Times (%d instances)' % n_test_documents)
autolabel(rectangles)
plt.show()
| bsd-3-clause |
bbengfort/cloudscope | cloudscope/viz.py | 1 | 10416 | # cloudscope.viz
# Helper functions for creating output visualizations from simulations.
#
# Author: Benjamin Bengfort <[email protected]>
# Created: Fri Dec 04 13:49:54 2015 -0500
#
# Copyright (C) 2015 University of Maryland
# For license information, see LICENSE.txt
#
# ID: viz.py [d0f0ca1] [email protected] $
"""
Helper functions for creating output visualizations from simulations.
"""
##########################################################################
## Imports
##########################################################################
from operator import itemgetter
from collections import defaultdict
from cloudscope.config import settings
from peak.util.imports import lazyModule
from cloudscope.colors import ColorMap
from networkx.readwrite import json_graph
from cloudscope.exceptions import BadValue  # NOTE: assumed module path; BadValue is raised in draw_topology below
# Perform lazy loading of visualization libraries
nx = lazyModule('networkx')
gt = lazyModule('graph_tool.all')
sns = lazyModule('seaborn')
plt = lazyModule('matplotlib.pyplot')
np = lazyModule('numpy')
pd = lazyModule('pandas')
##########################################################################
## Helper Functions
##########################################################################
def configure(**kwargs):
"""
Sets various configurations for Seaborn from the settings or arguments.
"""
# Get configurations to do modifications on them.
style = kwargs.pop('style', settings.vizualization.style)
context = kwargs.pop('context', settings.vizualization.context)
palette = kwargs.pop('palette', settings.vizualization.palette)
# Set the configurations on SNS
sns.set_style(style)
sns.set_context(context)
sns.set_palette(palette)
return kwargs
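# Illustrative call to configure() (editor's addition; the style/context names
# are ordinary seaborn values, not cloudscope-specific): recognized keys are
# consumed, anything else is handed back for the plotting call.
#     kwargs = configure(style="whitegrid", context="talk", color="k")
#     # seaborn is now configured and kwargs == {"color": "k"}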
##########################################################################
## Seaborn Drawing Utilities
##########################################################################
def plot_kde(series, **kwargs):
"""
Helper function to plot a density estimate of some distribution.
"""
kwargs = configure(**kwargs)
return sns.distplot(np.array(series), **kwargs)
def plot_time(series, **kwargs):
"""
Helper function to plot a simple time series on an axis.
"""
kwargs = configure(**kwargs)
return sns.tsplot(np.array(series), **kwargs)
##########################################################################
## Traces Drawing Utilities
##########################################################################
def plot_workload(results, series='devices', **kwargs):
"""
Helper function to make a timeline plot of reads/writes.
    Pass series='devices', 'locations', or 'objects' to choose how the timeline is grouped.
"""
kwargs = configure(**kwargs)
outpath = kwargs.pop('savefig', None)
series = {
'devices': 0,
'locations': 1,
'objects': 2,
}[series.lower()]
read_color = kwargs.pop('read_color', '#E20404')
write_color = kwargs.pop('write_color', '#1E05D9')
locations = defaultdict(list)
# Build the data from the read and write time series
for key in ('read', 'write'):
for item in results.results[key]:
locations[item[series]].append(
item + [key]
)
# Sort the data by timestamp
for key in locations:
locations[key].sort(key=itemgetter(0))
# Create the visualization
rx = []
ry = []
wx = []
wy = []
for idx, (key, lst) in enumerate(sorted(locations.items(), key=itemgetter(0), reverse=True)):
for item in lst:
if item[3] > 1000000: continue
if item[-1] == 'read':
rx.append(int(item[3]))
ry.append(idx)
else:
wx.append(int(item[3]))
wy.append(idx)
fig = plt.figure(figsize=(14,4))
plt.ylim((-1,len(locations)))
plt.xlim((-1000, max(max(rx), max(wx))+1000))
plt.yticks(range(len(locations)), sorted(locations.keys(), reverse=True))
plt.scatter(rx, ry, color=read_color, label="reads", alpha=0.5, s=10)
plt.scatter(wx, wy, color=write_color, label="writes", alpha=0.5, s=10)
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
if outpath:
return plt.savefig(outpath, format='svg', dpi=1200)
return plt
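# Illustrative call (editor's addition; `results` is whatever simulation
# results object is at hand, the filename is arbitrary):
#     plot_workload(results, series="locations", savefig="workload.svg")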
def plot_message_traffic(messages):
"""
Plots message traffic on a per-replica basis over time. Input data should
be an iterable of tuples of the form:
        (replica, timestamp, type, latency)
Which (handily) is exactly what is output to the results object.
"""
# Create data frame from results.
columns = ['replica', 'timestamp', 'type', 'latency']
tsize = pd.DataFrame([
dict(zip(columns, message)) for message in messages
])
# Aggregate messages into a single count by replica
messages = tsize.groupby(['timestamp', 'replica']).agg(len).unstack('replica').fillna(0)
# Plot the bar chart
ax = messages.plot(figsize=(14, 6), kind='bar', stacked=True, colormap='nipy_spectral')
# Configure the figure
ax.set_ylabel('number of messages')
ax.set_title('Message Counts by Replica over Time')
return ax
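# Illustrative call to plot_message_traffic (editor's addition; the records
# below are invented, real input comes from the simulation results). Each
# record is a (replica, timestamp, type, latency) tuple:
#     messages = [
#         ("r0", 1, "heartbeat", 3),
#         ("r1", 1, "append", 8),
#         ("r0", 2, "heartbeat", 2),
#     ]
#     ax = plot_message_traffic(messages)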
##########################################################################
## Graph Drawing Utilities
##########################################################################
def draw_topology(topo, kind='graph_tool', **kwargs):
"""
Draws a network graph from a topology with either graph tool or networkx.
"""
draw_funcs = {
'graph_tool': draw_graph_tool_topology,
'gt': draw_graph_tool_topology,
'networkx': draw_networkx_topology,
'nx': draw_networkx_topology,
}
if kind not in draw_funcs:
raise BadValue(
"Unknown graph draw kind '{}' chose from one of {}".format(
kind, ", ".join(draw_funcs.keys())
)
)
return draw_funcs[kind](topo, **kwargs)
def draw_graph_tool_topology(topo, **kwargs):
"""
Draws a network topology as loaded from a JSON file with graph tool.
"""
from cloudscope.results.graph import get_prop_type
G = gt.Graph(directed=True)
# Construct the Graph properties
for key, value in topo['meta'].items():
tname, value, key = get_prop_type(value, key)
prop = G.new_graph_property(tname)
G.graph_properties[key] = prop
G.graph_properties[key] = value
# Add the node properties
nprops = set()
for node in topo['nodes']:
for key, val in node.items():
if key in nprops: continue
tname, value, key = get_prop_type(val, key)
prop = G.new_vertex_property(tname)
G.vertex_properties[key] = prop
nprops.add(key)
# Add the edge properties
eprops = set()
for edge in topo['links']:
for key, val in edge.items():
if key in eprops: continue
if key == 'latency':
for name in ('latency_mean', 'latency_stddev', 'latency_weight'):
prop = G.new_edge_property('float')
G.edge_properties[name] = prop
continue
tname, value, key = get_prop_type(val, key)
if key in {'source', 'target'}:
tname = 'string'
prop = G.new_edge_property(tname)
G.edge_properties[key] = prop
eprops.add(key)
# Add the nodes
vertices = {}
for idx,node in enumerate(topo['nodes']):
v = G.add_vertex()
vertices[idx] = v
# Set the vertex properties
for key, value in node.items():
G.vp[key][v] = value
# Add the edges
for edge in topo['links']:
src = vertices[edge['source']]
dst = vertices[edge['target']]
e = G.add_edge(src, dst)
for key, value in edge.items():
if key == 'latency':
G.ep['latency_mean'][e] = float(value[0])
G.ep['latency_stddev'][e] = float(value[1])
G.ep['latency_weight'][e] = 1.0 / float(value[0])
continue
if key in {'source', 'target'}:
G.ep[key][e] = topo['nodes'][value]['id']
else:
G.ep[key][e] = value
# Graph Drawing Time
vlabel = G.vp['id']
vsize = 60
vcolor = G.new_vertex_property('string')
vcmap = {
'stentor': "#9b59b6",
'federated': "#3498db",
'unknown': "#95a5a6",
'eventual': "#e74c3c",
'tag': "#34495e",
'strong': "#2ecc71",
}
for vertex in G.vertices():
vcolor[vertex] = vcmap[G.vp['consistency'][vertex]]
ecolor = G.new_edge_property('string')
ecmap = ColorMap('paired', shuffle=False)
for edge in G.edges():
ecolor[edge] = ecmap(G.ep['area'][edge])
elabel = G.ep['connection']
esize = G.ep['latency_weight']
eweight = G.ep['latency_weight']
esize = gt.prop_to_size(esize, mi=2, ma=5)
pos = gt.arf_layout(G, weight=esize)
gt.graph_draw(
G, pos=pos,
vertex_text=vlabel, vertex_size=vsize, vertex_font_weight=1,
vertex_pen_width=1.3, vertex_fill_color=vcolor,
edge_pen_width=esize, edge_color=ecolor, edge_text=elabel,
output_size=(1200,1200), output="{}.png".format(topo['meta']['title']),
)
def draw_networkx_topology(topo, layout='circular', **kwargs):
"""
Draws a network topology as loaded from a JSON file with networkx.
"""
G = json_graph.node_link_graph(topo)
cmap = {
'stentor': "#9b59b6",
'federated': "#3498db",
'unknown': "#95a5a6",
'eventual': "#e74c3c",
'tag': "#34495e",
'strong': "#2ecc71",
}
lmap = {
'constant': 'solid',
'variable': 'dashed',
'normal': 'dashdot',
}
draw = {
'circular': nx.draw_circular,
'random': nx.draw_random,
'spectral': nx.draw_spectral,
'spring': nx.draw_spring,
'shell': nx.draw_shell,
'graphviz': nx.draw_graphviz,
}[layout]
# Compute the colors and links for the topology
colors = [cmap[n[1]['consistency']] for n in G.nodes(data=True)]
links = [lmap[n[2]['connection']] for n in G.edges(data=True)]
return draw(
G, with_labels=True, font_weight='bold',
node_size=800, node_color=colors,
style=links, edge_color='#333333'
)
| mit |
johndamen/pyeasyplot | easyplot/gui/__init__.py | 1 | 4850 | from PyQt4 import QtGui, QtCore
from . import settings
from .figure import FigureCanvas
from .. import datasets
import matplotlib.figure
import matplotlib.cm
from matplotlib import pyplot as plt
import numpy as np
from ..managers import FigureManager
from . import basewidgets as bw
from functools import partial
class EasyPlotWindow(QtGui.QMainWindow):
def __init__(self):
super().__init__()
self.setCentralWidget(EasyPlotWidget())
class EasyPlotWidget(QtGui.QWidget):
def __init__(self, *datasets, parent=None):
super().__init__(parent=parent)
self.figure = matplotlib.figure.Figure(facecolor='none')
self.figure_manager = FigureManager(self.figure)
self.canvas = FigureCanvas(self.figure)
self.layout = QtGui.QHBoxLayout(self)
self.figure_layout = QtGui.QVBoxLayout()
self.layout.addLayout(self.figure_layout)
self.figure_layout.addWidget(self.canvas)
# plot buttons
self.plot_button_layout = QtGui.QHBoxLayout()
self.plot_button_layout.setContentsMargins(0, 0, 0, 0)
self.plot_button = QtGui.QPushButton('plot')
self.plot_button.clicked.connect(self.plot)
self.plot_button_layout.addWidget(self.plot_button)
self.figure_layout.addLayout(self.plot_button_layout)
# settings
self.settings_toolbox = QtGui.QToolBox()
self.settings_toolbox.setFixedWidth(300)
self.layout.addWidget(self.settings_toolbox)
self.dataset_selector = DatasetSelectorWidget(datasets)
self.dataset_selector.added.connect(self.add_datasets)
self.settings_toolbox.addItem(self.dataset_selector, 'Datasets')
self.fig_settings_widget = settings.FigureSettings(self.figure_manager)
self.fig_settings_widget.changed.connect(self.plot)
self.fig_settings_widget.current_axes_changed.connect(self.change_current_axes)
self.settings_toolbox.addItem(self.fig_settings_widget, 'Figure')
self.ax_settings_widget = settings.AxesSettings()
self.ax_settings_widget.changed.connect(self.set_axsettings)
self.settings_toolbox.addItem(self.ax_settings_widget, 'Axes')
self.plot_stack = settings.PlotStack()
self.plot_stack.changed.connect(self.set_plotsettings)
self.settings_toolbox.addItem(self.plot_stack, 'Plot')
self.plot_settings_widget = settings.LegendSettings()
self.settings_toolbox.addItem(self.plot_settings_widget, 'Legend')
self.plot_settings_widget = settings.ColorbarSettings()
self.settings_toolbox.addItem(self.plot_settings_widget, 'Colorbar')
def change_current_axes(self, i):
old, new = i
old.format(**self.ax_settings_widget.kwargs)
self.ax_settings_widget.set_kwargs(reset=True, **new.settings)
self.canvas.draw()
def set_plotsettings(self, settings):
self.figure_manager.gca().layers.edit_current(**settings)
self.plot()
def set_axsettings(self, settings):
self.figure_manager.gca().format(**settings)
self.canvas.draw()
def plot(self):
for a in self.figure_manager.axes:
a.plot()
self.canvas.draw()
def add_datasets(self, datasets):
for d in datasets:
print('adding', d)
ax = self.figure_manager.gca()
ax.layers.add(d)
xlim, ylim = d.limits()
ax.check_limits(xlim=xlim, ylim=ylim)
self.plot_stack.for_dataset(d)
self.settings_toolbox.setCurrentWidget(self.plot_stack)
self.plot()
class DatasetSelectorWidget(QtGui.QWidget):
added = QtCore.pyqtSignal(list)
def __init__(self, datasets):
self.datasets = datasets
super().__init__()
self.build()
def build(self):
self.layout = QtGui.QVBoxLayout(self)
self.layout.setContentsMargins(10, 0, 0, 0)
self.list_widget = QtGui.QTreeWidget()
self.list_widget.setColumnCount(2)
self.list_widget.setColumnWidth(0, 150)
self.list_widget.setHeaderHidden(True)
self.layout.addWidget(self.list_widget)
for d in self.datasets:
self.list_widget.addTopLevelItem(DatasetItem(d))
self.layout.addItem(QtGui.QSpacerItem(0, 0, QtGui.QSizePolicy.Maximum, QtGui.QSizePolicy.Expanding))
self.button = QtGui.QPushButton('add')
self.button.clicked.connect(self.add)
self.layout.addWidget(self.button)
def add(self):
self.added.emit([i.dataset for i in self.list_widget.selectedItems()])
class DatasetItem(QtGui.QTreeWidgetItem):
def __init__(self, d):
self.dataset = d
super().__init__([self.dataset.LAYER_NAME])
for n, a in self.dataset:
self.addChild(QtGui.QTreeWidgetItem([n, str(a.shape)]))
| gpl-3.0 |
dsullivan7/scikit-learn | examples/decomposition/plot_pca_3d.py | 354 | 2432 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Principal components analysis (PCA)
=========================================================
These figures aid in illustrating how a point cloud
can be very flat in one direction--which is where PCA
comes in to choose a direction that is not flat.
"""
print(__doc__)
# Authors: Gael Varoquaux
# Jaques Grobler
# Kevin Hughes
# License: BSD 3 clause
from sklearn.decomposition import PCA
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
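# Editor's sketch (hedged, independent of the example below): on a toy cloud
# that is nearly flat along its third axis, PCA orders components by explained
# variance, so the flat direction shows up as the smallest entry of
# explained_variance_ratio_.
_toy = np.random.RandomState(0).normal(size=(500, 3)) * [1.0, 1.0, 0.01]
_toy_pca = PCA(n_components=3).fit(_toy)
# _toy_pca.explained_variance_ratio_ is sorted in decreasing order; the last
# value (the z-like direction) is tiny compared with the first two.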
###############################################################################
# Create the data
e = np.exp(1)
np.random.seed(4)
def pdf(x):
return 0.5 * (stats.norm(scale=0.25 / e).pdf(x)
+ stats.norm(scale=4 / e).pdf(x))
y = np.random.normal(scale=0.5, size=(30000))
x = np.random.normal(scale=0.5, size=(30000))
z = np.random.normal(scale=0.1, size=len(x))
density = pdf(x) * pdf(y)
pdf_z = pdf(5 * z)
density *= pdf_z
a = x + y
b = 2 * y
c = a - b + z
norm = np.sqrt(a.var() + b.var())
a /= norm
b /= norm
###############################################################################
# Plot the figures
def plot_figs(fig_num, elev, azim):
fig = plt.figure(fig_num, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=elev, azim=azim)
ax.scatter(a[::10], b[::10], c[::10], c=density[::10], marker='+', alpha=.4)
Y = np.c_[a, b, c]
# Using SciPy's SVD, this would be:
# _, pca_score, V = scipy.linalg.svd(Y, full_matrices=False)
pca = PCA(n_components=3)
pca.fit(Y)
pca_score = pca.explained_variance_ratio_
V = pca.components_
x_pca_axis, y_pca_axis, z_pca_axis = V.T * pca_score / pca_score.min()
x_pca_axis, y_pca_axis, z_pca_axis = 3 * V.T
x_pca_plane = np.r_[x_pca_axis[:2], - x_pca_axis[1::-1]]
y_pca_plane = np.r_[y_pca_axis[:2], - y_pca_axis[1::-1]]
z_pca_plane = np.r_[z_pca_axis[:2], - z_pca_axis[1::-1]]
x_pca_plane.shape = (2, 2)
y_pca_plane.shape = (2, 2)
z_pca_plane.shape = (2, 2)
ax.plot_surface(x_pca_plane, y_pca_plane, z_pca_plane)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
elev = -40
azim = -80
plot_figs(1, elev, azim)
elev = 30
azim = 20
plot_figs(2, elev, azim)
plt.show()
| bsd-3-clause |
mdrumond/tensorflow | tensorflow/contrib/metrics/python/kernel_tests/histogram_ops_test.py | 130 | 9577 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for histogram_ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.contrib.metrics.python.ops import histogram_ops
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
class Strict1dCumsumTest(test.TestCase):
"""Test this private function."""
def test_empty_tensor_returns_empty(self):
with self.test_session():
tensor = constant_op.constant([])
result = histogram_ops._strict_1d_cumsum(tensor, 0)
expected = constant_op.constant([])
np.testing.assert_array_equal(expected.eval(), result.eval())
def test_length_1_tensor_works(self):
with self.test_session():
tensor = constant_op.constant([3], dtype=dtypes.float32)
result = histogram_ops._strict_1d_cumsum(tensor, 1)
expected = constant_op.constant([3], dtype=dtypes.float32)
np.testing.assert_array_equal(expected.eval(), result.eval())
def test_length_3_tensor_works(self):
with self.test_session():
tensor = constant_op.constant([1, 2, 3], dtype=dtypes.float32)
result = histogram_ops._strict_1d_cumsum(tensor, 3)
expected = constant_op.constant([1, 3, 6], dtype=dtypes.float32)
np.testing.assert_array_equal(expected.eval(), result.eval())
class AUCUsingHistogramTest(test.TestCase):
def setUp(self):
self.rng = np.random.RandomState(0)
def test_empty_labels_and_scores_gives_nan_auc(self):
with self.test_session():
labels = constant_op.constant([], shape=[0], dtype=dtypes.bool)
scores = constant_op.constant([], shape=[0], dtype=dtypes.float32)
score_range = [0, 1.]
auc, update_op = histogram_ops.auc_using_histogram(labels, scores,
score_range)
variables.local_variables_initializer().run()
update_op.run()
self.assertTrue(np.isnan(auc.eval()))
def test_perfect_scores_gives_auc_1(self):
self._check_auc(
nbins=100,
desired_auc=1.0,
score_range=[0, 1.],
num_records=50,
frac_true=0.5,
atol=0.05,
num_updates=1)
def test_terrible_scores_gives_auc_0(self):
self._check_auc(
nbins=100,
desired_auc=0.0,
score_range=[0, 1.],
num_records=50,
frac_true=0.5,
atol=0.05,
num_updates=1)
def test_many_common_conditions(self):
for nbins in [50]:
for desired_auc in [0.3, 0.5, 0.8]:
for score_range in [[-1, 1], [-10, 0]]:
for frac_true in [0.3, 0.8]:
# Tests pass with atol = 0.03. Moved up to 0.05 to avoid flakes.
self._check_auc(
nbins=nbins,
desired_auc=desired_auc,
score_range=score_range,
num_records=100,
frac_true=frac_true,
atol=0.05,
num_updates=50)
def test_large_class_imbalance_still_ok(self):
# With probability frac_true ** num_records, each batch contains only True
# records. In this case, ~ 95%.
# Tests pass with atol = 0.02. Increased to 0.05 to avoid flakes.
self._check_auc(
nbins=100,
desired_auc=0.8,
score_range=[-1, 1.],
num_records=10,
frac_true=0.995,
atol=0.05,
num_updates=1000)
def test_super_accuracy_with_many_bins_and_records(self):
# Test passes with atol = 0.0005. Increased atol to avoid flakes.
self._check_auc(
nbins=1000,
desired_auc=0.75,
score_range=[0, 1.],
num_records=1000,
frac_true=0.5,
atol=0.005,
num_updates=100)
def _check_auc(self,
nbins=100,
desired_auc=0.75,
score_range=None,
num_records=50,
frac_true=0.5,
atol=0.05,
num_updates=10):
"""Check auc accuracy against synthetic data.
Args:
nbins: nbins arg from contrib.metrics.auc_using_histogram.
desired_auc: Number in [0, 1]. The desired auc for synthetic data.
score_range: 2-tuple, (low, high), giving the range of the resultant
scores. Defaults to [0, 1.].
num_records: Positive integer. The number of records to return.
frac_true: Number in (0, 1). Expected fraction of resultant labels that
will be True. This is just in expectation...more or less may actually
be True.
atol: Absolute tolerance for final AUC estimate.
num_updates: Update internal histograms this many times, each with a new
batch of synthetic data, before computing final AUC.
Raises:
AssertionError: If resultant AUC is not within atol of theoretical AUC
from synthetic data.
"""
    score_range = [0, 1.] if score_range is None else score_range
with self.test_session():
labels = array_ops.placeholder(dtypes.bool, shape=[num_records])
scores = array_ops.placeholder(dtypes.float32, shape=[num_records])
auc, update_op = histogram_ops.auc_using_histogram(
labels, scores, score_range, nbins=nbins)
variables.local_variables_initializer().run()
# Updates, then extract auc.
for _ in range(num_updates):
labels_a, scores_a = synthetic_data(desired_auc, score_range,
num_records, self.rng, frac_true)
update_op.run(feed_dict={labels: labels_a, scores: scores_a})
labels_a, scores_a = synthetic_data(desired_auc, score_range, num_records,
self.rng, frac_true)
# Fetch current auc, and verify that fetching again doesn't change it.
auc_eval = auc.eval()
self.assertAlmostEqual(auc_eval, auc.eval(), places=5)
msg = ('nbins: %s, desired_auc: %s, score_range: %s, '
'num_records: %s, frac_true: %s, num_updates: %s') % (nbins,
desired_auc,
score_range,
num_records,
frac_true,
num_updates)
np.testing.assert_allclose(desired_auc, auc_eval, atol=atol, err_msg=msg)
def synthetic_data(desired_auc, score_range, num_records, rng, frac_true):
"""Create synthetic boolean_labels and scores with adjustable auc.
Args:
desired_auc: Number in [0, 1], the theoretical AUC of resultant data.
score_range: 2-tuple, (low, high), giving the range of the resultant scores
num_records: Positive integer. The number of records to return.
rng: Initialized np.random.RandomState random number generator
frac_true: Number in (0, 1). Expected fraction of resultant labels that
will be True. This is just in expectation...more or less may actually be
True.
Returns:
boolean_labels: np.array, dtype=bool.
scores: np.array, dtype=np.float32
"""
# We prove here why the method (below) for computing AUC works. Of course we
# also checked this against sklearn.metrics.roc_auc_curve.
#
# First do this for score_range = [0, 1], then rescale.
# WLOG assume AUC >= 0.5, otherwise we will solve for AUC >= 0.5 then swap
# the labels.
# So for AUC in [0, 1] we create False and True labels
# and corresponding scores drawn from:
# F ~ U[0, 1], T ~ U[x, 1]
# We have,
# AUC
# = P[T > F]
# = P[T > F | F < x] P[F < x] + P[T > F | F > x] P[F > x]
# = (1 * x) + (0.5 * (1 - x)).
# Inverting, we have:
# x = 2 * AUC - 1, when AUC >= 0.5.
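  # Worked instance of the inversion above (editor's illustration): for
  # desired_auc = 0.75 we get x = 2 * 0.75 - 1 = 0.5, so True scores are drawn
  # from U[0.5, 1] while False scores come from U[0, 1]; then
  # P[T > F] = 1 * 0.5 + 0.5 * (1 - 0.5) = 0.75, as required.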
assert 0 <= desired_auc <= 1
assert 0 < frac_true < 1
if desired_auc < 0.5:
flip_labels = True
desired_auc = 1 - desired_auc
frac_true = 1 - frac_true
else:
flip_labels = False
x = 2 * desired_auc - 1
labels = rng.binomial(1, frac_true, size=num_records).astype(bool)
num_true = labels.sum()
num_false = num_records - labels.sum()
# Draw F ~ U[0, 1], and T ~ U[x, 1]
false_scores = rng.rand(num_false)
true_scores = x + rng.rand(num_true) * (1 - x)
# Reshape [0, 1] to score_range.
def reshape(scores):
return score_range[0] + scores * (score_range[1] - score_range[0])
false_scores = reshape(false_scores)
true_scores = reshape(true_scores)
# Place into one array corresponding with the labels.
scores = np.nan * np.ones(num_records, dtype=np.float32)
scores[labels] = true_scores
scores[~labels] = false_scores
if flip_labels:
labels = ~labels
return labels, scores
if __name__ == '__main__':
test.main()
| apache-2.0 |
sauliusl/seaborn | seaborn/tests/test_axisgrid.py | 3 | 51598 | import warnings
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
from distutils.version import LooseVersion
import pytest
import nose.tools as nt
import numpy.testing as npt
from numpy.testing.decorators import skipif
try:
import pandas.testing as tm
except ImportError:
import pandas.util.testing as tm
from .. import axisgrid as ag
from .. import rcmod
from ..palettes import color_palette
from ..distributions import kdeplot, _freedman_diaconis_bins
from ..categorical import pointplot
from ..utils import categorical_order
rs = np.random.RandomState(0)
old_matplotlib = LooseVersion(mpl.__version__) < "1.4"
pandas_has_categoricals = LooseVersion(pd.__version__) >= "0.15"
class TestFacetGrid(object):
df = pd.DataFrame(dict(x=rs.normal(size=60),
y=rs.gamma(4, size=60),
a=np.repeat(list("abc"), 20),
b=np.tile(list("mn"), 30),
c=np.tile(list("tuv"), 20),
d=np.tile(list("abcdefghijkl"), 5)))
def test_self_data(self):
g = ag.FacetGrid(self.df)
nt.assert_is(g.data, self.df)
def test_self_fig(self):
g = ag.FacetGrid(self.df)
nt.assert_is_instance(g.fig, plt.Figure)
def test_self_axes(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
for ax in g.axes.flat:
nt.assert_is_instance(ax, plt.Axes)
def test_axes_array_size(self):
g1 = ag.FacetGrid(self.df)
nt.assert_equal(g1.axes.shape, (1, 1))
g2 = ag.FacetGrid(self.df, row="a")
nt.assert_equal(g2.axes.shape, (3, 1))
g3 = ag.FacetGrid(self.df, col="b")
nt.assert_equal(g3.axes.shape, (1, 2))
g4 = ag.FacetGrid(self.df, hue="c")
nt.assert_equal(g4.axes.shape, (1, 1))
g5 = ag.FacetGrid(self.df, row="a", col="b", hue="c")
nt.assert_equal(g5.axes.shape, (3, 2))
for ax in g5.axes.flat:
nt.assert_is_instance(ax, plt.Axes)
def test_single_axes(self):
g1 = ag.FacetGrid(self.df)
nt.assert_is_instance(g1.ax, plt.Axes)
g2 = ag.FacetGrid(self.df, row="a")
with nt.assert_raises(AttributeError):
g2.ax
g3 = ag.FacetGrid(self.df, col="a")
with nt.assert_raises(AttributeError):
g3.ax
g4 = ag.FacetGrid(self.df, col="a", row="b")
with nt.assert_raises(AttributeError):
g4.ax
def test_col_wrap(self):
n = len(self.df.d.unique())
g = ag.FacetGrid(self.df, col="d")
assert g.axes.shape == (1, n)
assert g.facet_axis(0, 8) is g.axes[0, 8]
g_wrap = ag.FacetGrid(self.df, col="d", col_wrap=4)
assert g_wrap.axes.shape == (n,)
assert g_wrap.facet_axis(0, 8) is g_wrap.axes[8]
assert g_wrap._ncol == 4
assert g_wrap._nrow == (n / 4)
with pytest.raises(ValueError):
g = ag.FacetGrid(self.df, row="b", col="d", col_wrap=4)
df = self.df.copy()
df.loc[df.d == "j"] = np.nan
g_missing = ag.FacetGrid(df, col="d")
assert g_missing.axes.shape == (1, n - 1)
g_missing_wrap = ag.FacetGrid(df, col="d", col_wrap=4)
assert g_missing_wrap.axes.shape == (n - 1,)
g = ag.FacetGrid(self.df, col="d", col_wrap=1)
assert len(list(g.facet_data())) == n
def test_normal_axes(self):
null = np.empty(0, object).flat
g = ag.FacetGrid(self.df)
npt.assert_array_equal(g._bottom_axes, g.axes.flat)
npt.assert_array_equal(g._not_bottom_axes, null)
npt.assert_array_equal(g._left_axes, g.axes.flat)
npt.assert_array_equal(g._not_left_axes, null)
npt.assert_array_equal(g._inner_axes, null)
g = ag.FacetGrid(self.df, col="c")
npt.assert_array_equal(g._bottom_axes, g.axes.flat)
npt.assert_array_equal(g._not_bottom_axes, null)
npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat)
npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat)
npt.assert_array_equal(g._inner_axes, null)
g = ag.FacetGrid(self.df, row="c")
npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat)
npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat)
npt.assert_array_equal(g._left_axes, g.axes.flat)
npt.assert_array_equal(g._not_left_axes, null)
npt.assert_array_equal(g._inner_axes, null)
g = ag.FacetGrid(self.df, col="a", row="c")
npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat)
npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat)
npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat)
npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat)
npt.assert_array_equal(g._inner_axes, g.axes[:-1, 1:].flat)
def test_wrapped_axes(self):
null = np.empty(0, object).flat
g = ag.FacetGrid(self.df, col="a", col_wrap=2)
npt.assert_array_equal(g._bottom_axes,
g.axes[np.array([1, 2])].flat)
npt.assert_array_equal(g._not_bottom_axes, g.axes[:1].flat)
npt.assert_array_equal(g._left_axes, g.axes[np.array([0, 2])].flat)
npt.assert_array_equal(g._not_left_axes, g.axes[np.array([1])].flat)
npt.assert_array_equal(g._inner_axes, null)
def test_figure_size(self):
g = ag.FacetGrid(self.df, row="a", col="b")
npt.assert_array_equal(g.fig.get_size_inches(), (6, 9))
g = ag.FacetGrid(self.df, row="a", col="b", height=6)
npt.assert_array_equal(g.fig.get_size_inches(), (12, 18))
g = ag.FacetGrid(self.df, col="c", height=4, aspect=.5)
npt.assert_array_equal(g.fig.get_size_inches(), (6, 4))
def test_figure_size_with_legend(self):
g1 = ag.FacetGrid(self.df, col="a", hue="c", height=4, aspect=.5)
npt.assert_array_equal(g1.fig.get_size_inches(), (6, 4))
g1.add_legend()
nt.assert_greater(g1.fig.get_size_inches()[0], 6)
g2 = ag.FacetGrid(self.df, col="a", hue="c", height=4, aspect=.5,
legend_out=False)
npt.assert_array_equal(g2.fig.get_size_inches(), (6, 4))
g2.add_legend()
npt.assert_array_equal(g2.fig.get_size_inches(), (6, 4))
def test_legend_data(self):
g1 = ag.FacetGrid(self.df, hue="a")
g1.map(plt.plot, "x", "y")
g1.add_legend()
palette = color_palette(n_colors=3)
nt.assert_equal(g1._legend.get_title().get_text(), "a")
a_levels = sorted(self.df.a.unique())
lines = g1._legend.get_lines()
nt.assert_equal(len(lines), len(a_levels))
for line, hue in zip(lines, palette):
nt.assert_equal(line.get_color(), hue)
labels = g1._legend.get_texts()
nt.assert_equal(len(labels), len(a_levels))
for label, level in zip(labels, a_levels):
nt.assert_equal(label.get_text(), level)
def test_legend_data_missing_level(self):
g1 = ag.FacetGrid(self.df, hue="a", hue_order=list("azbc"))
g1.map(plt.plot, "x", "y")
g1.add_legend()
b, g, r, p = color_palette(n_colors=4)
palette = [b, r, p]
nt.assert_equal(g1._legend.get_title().get_text(), "a")
a_levels = sorted(self.df.a.unique())
lines = g1._legend.get_lines()
nt.assert_equal(len(lines), len(a_levels))
for line, hue in zip(lines, palette):
nt.assert_equal(line.get_color(), hue)
labels = g1._legend.get_texts()
nt.assert_equal(len(labels), 4)
for label, level in zip(labels, list("azbc")):
nt.assert_equal(label.get_text(), level)
def test_get_boolean_legend_data(self):
self.df["b_bool"] = self.df.b == "m"
g1 = ag.FacetGrid(self.df, hue="b_bool")
g1.map(plt.plot, "x", "y")
g1.add_legend()
palette = color_palette(n_colors=2)
nt.assert_equal(g1._legend.get_title().get_text(), "b_bool")
b_levels = list(map(str, categorical_order(self.df.b_bool)))
lines = g1._legend.get_lines()
nt.assert_equal(len(lines), len(b_levels))
for line, hue in zip(lines, palette):
nt.assert_equal(line.get_color(), hue)
labels = g1._legend.get_texts()
nt.assert_equal(len(labels), len(b_levels))
for label, level in zip(labels, b_levels):
nt.assert_equal(label.get_text(), level)
def test_legend_options(self):
g1 = ag.FacetGrid(self.df, hue="b")
g1.map(plt.plot, "x", "y")
g1.add_legend()
def test_legendout_with_colwrap(self):
g = ag.FacetGrid(self.df, col="d", hue='b',
col_wrap=4, legend_out=False)
g.map(plt.plot, "x", "y", linewidth=3)
g.add_legend()
def test_subplot_kws(self):
g = ag.FacetGrid(self.df, despine=False,
subplot_kws=dict(projection="polar"))
for ax in g.axes.flat:
nt.assert_true("PolarAxesSubplot" in str(type(ax)))
@skipif(old_matplotlib)
def test_gridspec_kws(self):
ratios = [3, 1, 2]
gskws = dict(width_ratios=ratios)
g = ag.FacetGrid(self.df, col='c', row='a', gridspec_kws=gskws)
for ax in g.axes.flat:
ax.set_xticks([])
ax.set_yticks([])
g.fig.tight_layout()
for (l, m, r) in g.axes:
assert l.get_position().width > m.get_position().width
assert r.get_position().width > m.get_position().width
@skipif(old_matplotlib)
def test_gridspec_kws_col_wrap(self):
ratios = [3, 1, 2, 1, 1]
gskws = dict(width_ratios=ratios)
with warnings.catch_warnings():
warnings.resetwarnings()
warnings.simplefilter("always")
npt.assert_warns(UserWarning, ag.FacetGrid, self.df, col='d',
col_wrap=5, gridspec_kws=gskws)
@skipif(not old_matplotlib)
    def test_gridspec_kws_old_mpl(self):
ratios = [3, 1, 2]
gskws = dict(width_ratios=ratios, height_ratios=ratios)
with warnings.catch_warnings():
warnings.resetwarnings()
warnings.simplefilter("always")
npt.assert_warns(UserWarning, ag.FacetGrid, self.df, col='c',
row='a', gridspec_kws=gskws)
def test_data_generator(self):
g = ag.FacetGrid(self.df, row="a")
d = list(g.facet_data())
nt.assert_equal(len(d), 3)
tup, data = d[0]
nt.assert_equal(tup, (0, 0, 0))
nt.assert_true((data["a"] == "a").all())
tup, data = d[1]
nt.assert_equal(tup, (1, 0, 0))
nt.assert_true((data["a"] == "b").all())
g = ag.FacetGrid(self.df, row="a", col="b")
d = list(g.facet_data())
nt.assert_equal(len(d), 6)
tup, data = d[0]
nt.assert_equal(tup, (0, 0, 0))
nt.assert_true((data["a"] == "a").all())
nt.assert_true((data["b"] == "m").all())
tup, data = d[1]
nt.assert_equal(tup, (0, 1, 0))
nt.assert_true((data["a"] == "a").all())
nt.assert_true((data["b"] == "n").all())
tup, data = d[2]
nt.assert_equal(tup, (1, 0, 0))
nt.assert_true((data["a"] == "b").all())
nt.assert_true((data["b"] == "m").all())
g = ag.FacetGrid(self.df, hue="c")
d = list(g.facet_data())
nt.assert_equal(len(d), 3)
tup, data = d[1]
nt.assert_equal(tup, (0, 0, 1))
nt.assert_true((data["c"] == "u").all())
def test_map(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
g.map(plt.plot, "x", "y", linewidth=3)
lines = g.axes[0, 0].lines
nt.assert_equal(len(lines), 3)
line1, _, _ = lines
nt.assert_equal(line1.get_linewidth(), 3)
x, y = line1.get_data()
mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t")
npt.assert_array_equal(x, self.df.x[mask])
npt.assert_array_equal(y, self.df.y[mask])
def test_map_dataframe(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
def plot(x, y, data=None, **kws):
plt.plot(data[x], data[y], **kws)
g.map_dataframe(plot, "x", "y", linestyle="--")
lines = g.axes[0, 0].lines
nt.assert_equal(len(lines), 3)
line1, _, _ = lines
nt.assert_equal(line1.get_linestyle(), "--")
x, y = line1.get_data()
mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t")
npt.assert_array_equal(x, self.df.x[mask])
npt.assert_array_equal(y, self.df.y[mask])
def test_set(self):
g = ag.FacetGrid(self.df, row="a", col="b")
xlim = (-2, 5)
ylim = (3, 6)
xticks = [-2, 0, 3, 5]
yticks = [3, 4.5, 6]
g.set(xlim=xlim, ylim=ylim, xticks=xticks, yticks=yticks)
for ax in g.axes.flat:
npt.assert_array_equal(ax.get_xlim(), xlim)
npt.assert_array_equal(ax.get_ylim(), ylim)
npt.assert_array_equal(ax.get_xticks(), xticks)
npt.assert_array_equal(ax.get_yticks(), yticks)
def test_set_titles(self):
g = ag.FacetGrid(self.df, row="a", col="b")
g.map(plt.plot, "x", "y")
# Test the default titles
nt.assert_equal(g.axes[0, 0].get_title(), "a = a | b = m")
nt.assert_equal(g.axes[0, 1].get_title(), "a = a | b = n")
nt.assert_equal(g.axes[1, 0].get_title(), "a = b | b = m")
# Test a provided title
g.set_titles("{row_var} == {row_name} \/ {col_var} == {col_name}")
nt.assert_equal(g.axes[0, 0].get_title(), "a == a \/ b == m")
nt.assert_equal(g.axes[0, 1].get_title(), "a == a \/ b == n")
nt.assert_equal(g.axes[1, 0].get_title(), "a == b \/ b == m")
# Test a single row
g = ag.FacetGrid(self.df, col="b")
g.map(plt.plot, "x", "y")
# Test the default titles
nt.assert_equal(g.axes[0, 0].get_title(), "b = m")
nt.assert_equal(g.axes[0, 1].get_title(), "b = n")
# test with dropna=False
g = ag.FacetGrid(self.df, col="b", hue="b", dropna=False)
g.map(plt.plot, 'x', 'y')
def test_set_titles_margin_titles(self):
g = ag.FacetGrid(self.df, row="a", col="b", margin_titles=True)
g.map(plt.plot, "x", "y")
# Test the default titles
nt.assert_equal(g.axes[0, 0].get_title(), "b = m")
nt.assert_equal(g.axes[0, 1].get_title(), "b = n")
nt.assert_equal(g.axes[1, 0].get_title(), "")
# Test the row "titles"
nt.assert_equal(g.axes[0, 1].texts[0].get_text(), "a = a")
nt.assert_equal(g.axes[1, 1].texts[0].get_text(), "a = b")
# Test a provided title
g.set_titles(col_template="{col_var} == {col_name}")
nt.assert_equal(g.axes[0, 0].get_title(), "b == m")
nt.assert_equal(g.axes[0, 1].get_title(), "b == n")
nt.assert_equal(g.axes[1, 0].get_title(), "")
def test_set_ticklabels(self):
g = ag.FacetGrid(self.df, row="a", col="b")
g.map(plt.plot, "x", "y")
xlab = [l.get_text() + "h" for l in g.axes[1, 0].get_xticklabels()]
ylab = [l.get_text() for l in g.axes[1, 0].get_yticklabels()]
g.set_xticklabels(xlab)
g.set_yticklabels(rotation=90)
got_x = [l.get_text() for l in g.axes[1, 1].get_xticklabels()]
got_y = [l.get_text() for l in g.axes[0, 0].get_yticklabels()]
npt.assert_array_equal(got_x, xlab)
npt.assert_array_equal(got_y, ylab)
x, y = np.arange(10), np.arange(10)
df = pd.DataFrame(np.c_[x, y], columns=["x", "y"])
g = ag.FacetGrid(df).map(pointplot, "x", "y", order=x)
g.set_xticklabels(step=2)
got_x = [int(l.get_text()) for l in g.axes[0, 0].get_xticklabels()]
npt.assert_array_equal(x[::2], got_x)
g = ag.FacetGrid(self.df, col="d", col_wrap=5)
g.map(plt.plot, "x", "y")
g.set_xticklabels(rotation=45)
g.set_yticklabels(rotation=75)
for ax in g._bottom_axes:
for l in ax.get_xticklabels():
nt.assert_equal(l.get_rotation(), 45)
for ax in g._left_axes:
for l in ax.get_yticklabels():
nt.assert_equal(l.get_rotation(), 75)
def test_set_axis_labels(self):
g = ag.FacetGrid(self.df, row="a", col="b")
g.map(plt.plot, "x", "y")
xlab = 'xx'
ylab = 'yy'
g.set_axis_labels(xlab, ylab)
got_x = [ax.get_xlabel() for ax in g.axes[-1, :]]
got_y = [ax.get_ylabel() for ax in g.axes[:, 0]]
npt.assert_array_equal(got_x, xlab)
npt.assert_array_equal(got_y, ylab)
def test_axis_lims(self):
g = ag.FacetGrid(self.df, row="a", col="b", xlim=(0, 4), ylim=(-2, 3))
nt.assert_equal(g.axes[0, 0].get_xlim(), (0, 4))
nt.assert_equal(g.axes[0, 0].get_ylim(), (-2, 3))
def test_data_orders(self):
g = ag.FacetGrid(self.df, row="a", col="b", hue="c")
nt.assert_equal(g.row_names, list("abc"))
nt.assert_equal(g.col_names, list("mn"))
nt.assert_equal(g.hue_names, list("tuv"))
nt.assert_equal(g.axes.shape, (3, 2))
g = ag.FacetGrid(self.df, row="a", col="b", hue="c",
row_order=list("bca"),
col_order=list("nm"),
hue_order=list("vtu"))
nt.assert_equal(g.row_names, list("bca"))
nt.assert_equal(g.col_names, list("nm"))
nt.assert_equal(g.hue_names, list("vtu"))
nt.assert_equal(g.axes.shape, (3, 2))
g = ag.FacetGrid(self.df, row="a", col="b", hue="c",
row_order=list("bcda"),
col_order=list("nom"),
hue_order=list("qvtu"))
nt.assert_equal(g.row_names, list("bcda"))
nt.assert_equal(g.col_names, list("nom"))
nt.assert_equal(g.hue_names, list("qvtu"))
nt.assert_equal(g.axes.shape, (4, 3))
def test_palette(self):
rcmod.set()
g = ag.FacetGrid(self.df, hue="c")
assert g._colors == color_palette(n_colors=len(self.df.c.unique()))
g = ag.FacetGrid(self.df, hue="d")
assert g._colors == color_palette("husl", len(self.df.d.unique()))
g = ag.FacetGrid(self.df, hue="c", palette="Set2")
assert g._colors == color_palette("Set2", len(self.df.c.unique()))
dict_pal = dict(t="red", u="green", v="blue")
list_pal = color_palette(["red", "green", "blue"], 3)
g = ag.FacetGrid(self.df, hue="c", palette=dict_pal)
assert g._colors == list_pal
list_pal = color_palette(["green", "blue", "red"], 3)
g = ag.FacetGrid(self.df, hue="c", hue_order=list("uvt"),
palette=dict_pal)
assert g._colors == list_pal
def test_hue_kws(self):
kws = dict(marker=["o", "s", "D"])
g = ag.FacetGrid(self.df, hue="c", hue_kws=kws)
g.map(plt.plot, "x", "y")
for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
nt.assert_equal(line.get_marker(), marker)
def test_dropna(self):
df = self.df.copy()
hasna = pd.Series(np.tile(np.arange(6), 10), dtype=np.float)
hasna[hasna == 5] = np.nan
df["hasna"] = hasna
g = ag.FacetGrid(df, dropna=False, row="hasna")
nt.assert_equal(g._not_na.sum(), 60)
g = ag.FacetGrid(df, dropna=True, row="hasna")
nt.assert_equal(g._not_na.sum(), 50)
def test_unicode_column_label_with_rows(self):
# use a smaller copy of the default testing data frame:
df = self.df.copy()
df = df[["a", "b", "x"]]
# rename column 'a' (which will be used for the columns in the grid)
# by using a Unicode string:
unicode_column_label = u"\u01ff\u02ff\u03ff"
df = df.rename(columns={"a": unicode_column_label})
# ensure that the data frame columns have the expected names:
nt.assert_equal(list(df.columns), [unicode_column_label, "b", "x"])
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, col=unicode_column_label, row="b")
g = g.map(plt.plot, "x")
def test_unicode_column_label_no_rows(self):
# use a smaller copy of the default testing data frame:
df = self.df.copy()
df = df[["a", "x"]]
# rename column 'a' (which will be used for the columns in the grid)
# by using a Unicode string:
unicode_column_label = u"\u01ff\u02ff\u03ff"
df = df.rename(columns={"a": unicode_column_label})
# ensure that the data frame columns have the expected names:
nt.assert_equal(list(df.columns), [unicode_column_label, "x"])
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, col=unicode_column_label)
g = g.map(plt.plot, "x")
def test_unicode_row_label_with_columns(self):
# use a smaller copy of the default testing data frame:
df = self.df.copy()
df = df[["a", "b", "x"]]
# rename column 'b' (which will be used for the rows in the grid)
# by using a Unicode string:
unicode_row_label = u"\u01ff\u02ff\u03ff"
df = df.rename(columns={"b": unicode_row_label})
# ensure that the data frame columns have the expected names:
nt.assert_equal(list(df.columns), ["a", unicode_row_label, "x"])
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, col="a", row=unicode_row_label)
g = g.map(plt.plot, "x")
def test_unicode_row_label_no_columns(self):
# use a smaller copy of the default testing data frame:
df = self.df.copy()
df = df[["b", "x"]]
# rename column 'b' (which will be used for the rows in the grid)
# by using a Unicode string:
unicode_row_label = u"\u01ff\u02ff\u03ff"
df = df.rename(columns={"b": unicode_row_label})
# ensure that the data frame columns have the expected names:
nt.assert_equal(list(df.columns), [unicode_row_label, "x"])
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, row=unicode_row_label)
g = g.map(plt.plot, "x")
def test_unicode_content_with_row_and_column(self):
df = self.df.copy()
# replace content of column 'a' (which will form the columns in the
# grid) by Unicode characters:
unicode_column_val = np.repeat((u'\u01ff', u'\u02ff', u'\u03ff'), 20)
df["a"] = unicode_column_val
# make sure that the replacement worked as expected:
nt.assert_equal(
list(df["a"]),
[u'\u01ff'] * 20 + [u'\u02ff'] * 20 + [u'\u03ff'] * 20)
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, col="a", row="b")
g = g.map(plt.plot, "x")
def test_unicode_content_no_rows(self):
df = self.df.copy()
# replace content of column 'a' (which will form the columns in the
# grid) by Unicode characters:
unicode_column_val = np.repeat((u'\u01ff', u'\u02ff', u'\u03ff'), 20)
df["a"] = unicode_column_val
# make sure that the replacement worked as expected:
nt.assert_equal(
list(df["a"]),
[u'\u01ff'] * 20 + [u'\u02ff'] * 20 + [u'\u03ff'] * 20)
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, col="a")
g = g.map(plt.plot, "x")
def test_unicode_content_no_columns(self):
df = self.df.copy()
# replace content of column 'a' (which will form the rows in the
# grid) by Unicode characters:
unicode_column_val = np.repeat((u'\u01ff', u'\u02ff', u'\u03ff'), 20)
df["b"] = unicode_column_val
# make sure that the replacement worked as expected:
nt.assert_equal(
list(df["b"]),
[u'\u01ff'] * 20 + [u'\u02ff'] * 20 + [u'\u03ff'] * 20)
# plot the grid -- if successful, no UnicodeEncodingError should
# occur:
g = ag.FacetGrid(df, row="b")
g = g.map(plt.plot, "x")
@skipif(not pandas_has_categoricals)
def test_categorical_column_missing_categories(self):
df = self.df.copy()
df['a'] = df['a'].astype('category')
g = ag.FacetGrid(df[df['a'] == 'a'], col="a", col_wrap=1)
nt.assert_equal(g.axes.shape, (len(df['a'].cat.categories),))
def test_categorical_warning(self):
g = ag.FacetGrid(self.df, col="b")
with warnings.catch_warnings():
warnings.resetwarnings()
warnings.simplefilter("always")
npt.assert_warns(UserWarning, g.map, pointplot, "b", "x")
class TestPairGrid(object):
rs = np.random.RandomState(sum(map(ord, "PairGrid")))
df = pd.DataFrame(dict(x=rs.normal(size=60),
y=rs.randint(0, 4, size=(60)),
z=rs.gamma(3, size=60),
a=np.repeat(list("abc"), 20),
b=np.repeat(list("abcdefghijkl"), 5)))
def test_self_data(self):
g = ag.PairGrid(self.df)
nt.assert_is(g.data, self.df)
def test_ignore_datelike_data(self):
df = self.df.copy()
df['date'] = pd.date_range('2010-01-01', periods=len(df), freq='d')
result = ag.PairGrid(self.df).data
expected = df.drop('date', axis=1)
tm.assert_frame_equal(result, expected)
def test_self_fig(self):
g = ag.PairGrid(self.df)
nt.assert_is_instance(g.fig, plt.Figure)
def test_self_axes(self):
g = ag.PairGrid(self.df)
for ax in g.axes.flat:
nt.assert_is_instance(ax, plt.Axes)
def test_default_axes(self):
g = ag.PairGrid(self.df)
nt.assert_equal(g.axes.shape, (3, 3))
nt.assert_equal(g.x_vars, ["x", "y", "z"])
nt.assert_equal(g.y_vars, ["x", "y", "z"])
nt.assert_true(g.square_grid)
def test_specific_square_axes(self):
vars = ["z", "x"]
g = ag.PairGrid(self.df, vars=vars)
nt.assert_equal(g.axes.shape, (len(vars), len(vars)))
nt.assert_equal(g.x_vars, vars)
nt.assert_equal(g.y_vars, vars)
nt.assert_true(g.square_grid)
def test_specific_nonsquare_axes(self):
x_vars = ["x", "y"]
y_vars = ["z", "y", "x"]
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars)))
nt.assert_equal(g.x_vars, x_vars)
nt.assert_equal(g.y_vars, y_vars)
nt.assert_true(not g.square_grid)
x_vars = ["x", "y"]
y_vars = "z"
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars)))
nt.assert_equal(g.x_vars, list(x_vars))
nt.assert_equal(g.y_vars, list(y_vars))
nt.assert_true(not g.square_grid)
def test_specific_square_axes_with_array(self):
vars = np.array(["z", "x"])
g = ag.PairGrid(self.df, vars=vars)
nt.assert_equal(g.axes.shape, (len(vars), len(vars)))
nt.assert_equal(g.x_vars, list(vars))
nt.assert_equal(g.y_vars, list(vars))
nt.assert_true(g.square_grid)
def test_specific_nonsquare_axes_with_array(self):
x_vars = np.array(["x", "y"])
y_vars = np.array(["z", "y", "x"])
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars)))
nt.assert_equal(g.x_vars, list(x_vars))
nt.assert_equal(g.y_vars, list(y_vars))
nt.assert_true(not g.square_grid)
def test_size(self):
g1 = ag.PairGrid(self.df, height=3)
npt.assert_array_equal(g1.fig.get_size_inches(), (9, 9))
g2 = ag.PairGrid(self.df, height=4, aspect=.5)
npt.assert_array_equal(g2.fig.get_size_inches(), (6, 12))
g3 = ag.PairGrid(self.df, y_vars=["z"], x_vars=["x", "y"],
height=2, aspect=2)
npt.assert_array_equal(g3.fig.get_size_inches(), (8, 2))
def test_map(self):
vars = ["x", "y", "z"]
g1 = ag.PairGrid(self.df)
g1.map(plt.scatter)
for i, axes_i in enumerate(g1.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
g2 = ag.PairGrid(self.df, "a")
g2.map(plt.scatter)
for i, axes_i in enumerate(g2.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
for k, k_level in enumerate(self.df.a.unique()):
x_in_k = x_in[self.df.a == k_level]
y_in_k = y_in[self.df.a == k_level]
x_out, y_out = ax.collections[k].get_offsets().T
npt.assert_array_equal(x_in_k, x_out)
npt.assert_array_equal(y_in_k, y_out)
def test_map_nonsquare(self):
x_vars = ["x"]
y_vars = ["y", "z"]
g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars)
g.map(plt.scatter)
x_in = self.df.x
for i, i_var in enumerate(y_vars):
ax = g.axes[i, 0]
y_in = self.df[i_var]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
def test_map_lower(self):
vars = ["x", "y", "z"]
g = ag.PairGrid(self.df)
g.map_lower(plt.scatter)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.triu_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
def test_map_upper(self):
vars = ["x", "y", "z"]
g = ag.PairGrid(self.df)
g.map_upper(plt.scatter)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
@skipif(old_matplotlib)
def test_map_diag(self):
g1 = ag.PairGrid(self.df)
g1.map_diag(plt.hist)
for ax in g1.diag_axes:
nt.assert_equal(len(ax.patches), 10)
g2 = ag.PairGrid(self.df)
g2.map_diag(plt.hist, bins=15)
for ax in g2.diag_axes:
nt.assert_equal(len(ax.patches), 15)
g3 = ag.PairGrid(self.df, hue="a")
g3.map_diag(plt.hist)
for ax in g3.diag_axes:
nt.assert_equal(len(ax.patches), 30)
g4 = ag.PairGrid(self.df, hue="a")
g4.map_diag(plt.hist, histtype='step')
for ax in g4.diag_axes:
for ptch in ax.patches:
nt.assert_equal(ptch.fill, False)
@skipif(old_matplotlib)
def test_map_diag_color(self):
color = "red"
rgb_color = mpl.colors.colorConverter.to_rgba(color)
g1 = ag.PairGrid(self.df)
g1.map_diag(plt.hist, color=color)
for ax in g1.diag_axes:
for patch in ax.patches:
nt.assert_equals(patch.get_facecolor(), rgb_color)
g2 = ag.PairGrid(self.df)
g2.map_diag(kdeplot, color='red')
for ax in g2.diag_axes:
for line in ax.lines:
nt.assert_equals(line.get_color(), color)
@skipif(old_matplotlib)
def test_map_diag_palette(self):
pal = color_palette(n_colors=len(self.df.a.unique()))
g = ag.PairGrid(self.df, hue="a")
g.map_diag(kdeplot)
for ax in g.diag_axes:
for line, color in zip(ax.lines, pal):
nt.assert_equals(line.get_color(), color)
@skipif(old_matplotlib)
def test_map_diag_and_offdiag(self):
vars = ["x", "y", "z"]
g = ag.PairGrid(self.df)
g.map_offdiag(plt.scatter)
g.map_diag(plt.hist)
for ax in g.diag_axes:
nt.assert_equal(len(ax.patches), 10)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
def test_palette(self):
rcmod.set()
g = ag.PairGrid(self.df, hue="a")
assert g.palette == color_palette(n_colors=len(self.df.a.unique()))
g = ag.PairGrid(self.df, hue="b")
assert g.palette == color_palette("husl", len(self.df.b.unique()))
g = ag.PairGrid(self.df, hue="a", palette="Set2")
assert g.palette == color_palette("Set2", len(self.df.a.unique()))
dict_pal = dict(a="red", b="green", c="blue")
list_pal = color_palette(["red", "green", "blue"])
g = ag.PairGrid(self.df, hue="a", palette=dict_pal)
assert g.palette == list_pal
list_pal = color_palette(["blue", "red", "green"])
g = ag.PairGrid(self.df, hue="a", hue_order=list("cab"),
palette=dict_pal)
assert g.palette == list_pal
def test_hue_kws(self):
kws = dict(marker=["o", "s", "d", "+"])
g = ag.PairGrid(self.df, hue="a", hue_kws=kws)
g.map(plt.plot)
for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
nt.assert_equal(line.get_marker(), marker)
g = ag.PairGrid(self.df, hue="a", hue_kws=kws,
hue_order=list("dcab"))
g.map(plt.plot)
for line, marker in zip(g.axes[0, 0].lines, kws["marker"]):
nt.assert_equal(line.get_marker(), marker)
@skipif(old_matplotlib)
def test_hue_order(self):
order = list("dcab")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_diag(plt.plot)
for line, level in zip(g.axes[0, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_lower(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_upper(plt.plot)
for line, level in zip(g.axes[0, 1].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
@skipif(old_matplotlib)
def test_hue_order_missing_level(self):
order = list("dcaeb")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_diag(plt.plot)
for line, level in zip(g.axes[0, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_lower(plt.plot)
for line, level in zip(g.axes[1, 0].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"])
plt.close("all")
g = ag.PairGrid(self.df, hue="a", hue_order=order)
g.map_upper(plt.plot)
for line, level in zip(g.axes[0, 1].lines, order):
x, y = line.get_xydata().T
npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"])
npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"])
plt.close("all")
def test_nondefault_index(self):
df = self.df.copy().set_index("b")
vars = ["x", "y", "z"]
g1 = ag.PairGrid(df)
g1.map(plt.scatter)
for i, axes_i in enumerate(g1.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
g2 = ag.PairGrid(df, "a")
g2.map(plt.scatter)
for i, axes_i in enumerate(g2.axes):
for j, ax in enumerate(axes_i):
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
for k, k_level in enumerate(self.df.a.unique()):
x_in_k = x_in[self.df.a == k_level]
y_in_k = y_in[self.df.a == k_level]
x_out, y_out = ax.collections[k].get_offsets().T
npt.assert_array_equal(x_in_k, x_out)
npt.assert_array_equal(y_in_k, y_out)
@skipif(old_matplotlib)
def test_pairplot(self):
vars = ["x", "y", "z"]
g = ag.pairplot(self.df)
for ax in g.diag_axes:
assert len(ax.patches) > 1
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
g = ag.pairplot(self.df, hue="a")
n = len(self.df.a.unique())
for ax in g.diag_axes:
assert len(ax.lines) == n
assert len(ax.collections) == n
@skipif(old_matplotlib)
def test_pairplot_reg(self):
vars = ["x", "y", "z"]
g = ag.pairplot(self.df, diag_kind="hist", kind="reg")
for ax in g.diag_axes:
nt.assert_equal(len(ax.patches), 10)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
nt.assert_equal(len(ax.lines), 1)
nt.assert_equal(len(ax.collections), 2)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
nt.assert_equal(len(ax.lines), 1)
nt.assert_equal(len(ax.collections), 2)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
@skipif(old_matplotlib)
def test_pairplot_kde(self):
vars = ["x", "y", "z"]
g = ag.pairplot(self.df, diag_kind="kde")
for ax in g.diag_axes:
nt.assert_equal(len(ax.lines), 1)
for i, j in zip(*np.triu_indices_from(g.axes, 1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.tril_indices_from(g.axes, -1)):
ax = g.axes[i, j]
x_in = self.df[vars[j]]
y_in = self.df[vars[i]]
x_out, y_out = ax.collections[0].get_offsets().T
npt.assert_array_equal(x_in, x_out)
npt.assert_array_equal(y_in, y_out)
for i, j in zip(*np.diag_indices_from(g.axes)):
ax = g.axes[i, j]
nt.assert_equal(len(ax.collections), 0)
@skipif(old_matplotlib)
def test_pairplot_markers(self):
vars = ["x", "y", "z"]
markers = ["o", "x", "s"]
g = ag.pairplot(self.df, hue="a", vars=vars, markers=markers)
assert g.hue_kws["marker"] == markers
plt.close("all")
with pytest.raises(ValueError):
g = ag.pairplot(self.df, hue="a", vars=vars, markers=markers[:-2])
class TestJointGrid(object):
rs = np.random.RandomState(sum(map(ord, "JointGrid")))
x = rs.randn(100)
y = rs.randn(100)
x_na = x.copy()
x_na[10] = np.nan
x_na[20] = np.nan
data = pd.DataFrame(dict(x=x, y=y, x_na=x_na))
def test_margin_grid_from_lists(self):
g = ag.JointGrid(self.x.tolist(), self.y.tolist())
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_from_arrays(self):
g = ag.JointGrid(self.x, self.y)
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_from_series(self):
g = ag.JointGrid(self.data.x, self.data.y)
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_from_dataframe(self):
g = ag.JointGrid("x", "y", self.data)
npt.assert_array_equal(g.x, self.x)
npt.assert_array_equal(g.y, self.y)
def test_margin_grid_from_dataframe_bad_variable(self):
with nt.assert_raises(ValueError):
ag.JointGrid("x", "bad_column", self.data)
def test_margin_grid_axis_labels(self):
g = ag.JointGrid("x", "y", self.data)
xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel()
nt.assert_equal(xlabel, "x")
nt.assert_equal(ylabel, "y")
g.set_axis_labels("x variable", "y variable")
xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel()
nt.assert_equal(xlabel, "x variable")
nt.assert_equal(ylabel, "y variable")
def test_dropna(self):
g = ag.JointGrid("x_na", "y", self.data, dropna=False)
nt.assert_equal(len(g.x), len(self.x_na))
g = ag.JointGrid("x_na", "y", self.data, dropna=True)
nt.assert_equal(len(g.x), pd.notnull(self.x_na).sum())
def test_axlims(self):
lim = (-3, 3)
g = ag.JointGrid("x", "y", self.data, xlim=lim, ylim=lim)
nt.assert_equal(g.ax_joint.get_xlim(), lim)
nt.assert_equal(g.ax_joint.get_ylim(), lim)
nt.assert_equal(g.ax_marg_x.get_xlim(), lim)
nt.assert_equal(g.ax_marg_y.get_ylim(), lim)
def test_marginal_ticks(self):
g = ag.JointGrid("x", "y", self.data)
nt.assert_true(~len(g.ax_marg_x.get_xticks()))
nt.assert_true(~len(g.ax_marg_y.get_yticks()))
def test_bivariate_plot(self):
g = ag.JointGrid("x", "y", self.data)
g.plot_joint(plt.plot)
x, y = g.ax_joint.lines[0].get_xydata().T
npt.assert_array_equal(x, self.x)
npt.assert_array_equal(y, self.y)
def test_univariate_plot(self):
g = ag.JointGrid("x", "x", self.data)
g.plot_marginals(kdeplot)
_, y1 = g.ax_marg_x.lines[0].get_xydata().T
y2, _ = g.ax_marg_y.lines[0].get_xydata().T
npt.assert_array_equal(y1, y2)
def test_plot(self):
g = ag.JointGrid("x", "x", self.data)
g.plot(plt.plot, kdeplot)
x, y = g.ax_joint.lines[0].get_xydata().T
npt.assert_array_equal(x, self.x)
npt.assert_array_equal(y, self.x)
_, y1 = g.ax_marg_x.lines[0].get_xydata().T
y2, _ = g.ax_marg_y.lines[0].get_xydata().T
npt.assert_array_equal(y1, y2)
def test_annotate(self):
g = ag.JointGrid("x", "y", self.data)
rp = stats.pearsonr(self.x, self.y)
g.annotate(stats.pearsonr)
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, "pearsonr = %.2g; p = %.2g" % rp)
g.annotate(stats.pearsonr, stat="correlation")
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, "correlation = %.2g; p = %.2g" % rp)
def rsquared(x, y):
return stats.pearsonr(x, y)[0] ** 2
r2 = rsquared(self.x, self.y)
g.annotate(rsquared)
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, "rsquared = %.2g" % r2)
template = "{stat} = {val:.3g} (p = {p:.3g})"
g.annotate(stats.pearsonr, template=template)
annotation = g.ax_joint.legend_.texts[0].get_text()
nt.assert_equal(annotation, template.format(stat="pearsonr",
val=rp[0], p=rp[1]))
def test_space(self):
g = ag.JointGrid("x", "y", self.data, space=0)
joint_bounds = g.ax_joint.bbox.bounds
marg_x_bounds = g.ax_marg_x.bbox.bounds
marg_y_bounds = g.ax_marg_y.bbox.bounds
nt.assert_equal(joint_bounds[2], marg_x_bounds[2])
nt.assert_equal(joint_bounds[3], marg_y_bounds[3])
class TestJointPlot(object):
rs = np.random.RandomState(sum(map(ord, "jointplot")))
x = rs.randn(100)
y = rs.randn(100)
data = pd.DataFrame(dict(x=x, y=y))
def test_scatter(self):
g = ag.jointplot("x", "y", self.data)
nt.assert_equal(len(g.ax_joint.collections), 1)
x, y = g.ax_joint.collections[0].get_offsets().T
npt.assert_array_equal(self.x, x)
npt.assert_array_equal(self.y, y)
x_bins = _freedman_diaconis_bins(self.x)
nt.assert_equal(len(g.ax_marg_x.patches), x_bins)
y_bins = _freedman_diaconis_bins(self.y)
nt.assert_equal(len(g.ax_marg_y.patches), y_bins)
def test_reg(self):
g = ag.jointplot("x", "y", self.data, kind="reg")
nt.assert_equal(len(g.ax_joint.collections), 2)
x, y = g.ax_joint.collections[0].get_offsets().T
npt.assert_array_equal(self.x, x)
npt.assert_array_equal(self.y, y)
x_bins = _freedman_diaconis_bins(self.x)
nt.assert_equal(len(g.ax_marg_x.patches), x_bins)
y_bins = _freedman_diaconis_bins(self.y)
nt.assert_equal(len(g.ax_marg_y.patches), y_bins)
nt.assert_equal(len(g.ax_joint.lines), 1)
nt.assert_equal(len(g.ax_marg_x.lines), 1)
nt.assert_equal(len(g.ax_marg_y.lines), 1)
def test_resid(self):
g = ag.jointplot("x", "y", self.data, kind="resid")
nt.assert_equal(len(g.ax_joint.collections), 1)
nt.assert_equal(len(g.ax_joint.lines), 1)
nt.assert_equal(len(g.ax_marg_x.lines), 0)
nt.assert_equal(len(g.ax_marg_y.lines), 1)
def test_hex(self):
g = ag.jointplot("x", "y", self.data, kind="hex")
nt.assert_equal(len(g.ax_joint.collections), 1)
x_bins = _freedman_diaconis_bins(self.x)
nt.assert_equal(len(g.ax_marg_x.patches), x_bins)
y_bins = _freedman_diaconis_bins(self.y)
nt.assert_equal(len(g.ax_marg_y.patches), y_bins)
def test_kde(self):
g = ag.jointplot("x", "y", self.data, kind="kde")
nt.assert_true(len(g.ax_joint.collections) > 0)
nt.assert_equal(len(g.ax_marg_x.collections), 1)
nt.assert_equal(len(g.ax_marg_y.collections), 1)
nt.assert_equal(len(g.ax_marg_x.lines), 1)
nt.assert_equal(len(g.ax_marg_y.lines), 1)
def test_color(self):
g = ag.jointplot("x", "y", self.data, color="purple")
purple = mpl.colors.colorConverter.to_rgb("purple")
scatter_color = g.ax_joint.collections[0].get_facecolor()[0, :3]
nt.assert_equal(tuple(scatter_color), purple)
hist_color = g.ax_marg_x.patches[0].get_facecolor()[:3]
nt.assert_equal(hist_color, purple)
def test_annotation(self):
g = ag.jointplot("x", "y", self.data, stat_func=stats.pearsonr)
nt.assert_equal(len(g.ax_joint.legend_.get_texts()), 1)
g = ag.jointplot("x", "y", self.data, stat_func=None)
nt.assert_is(g.ax_joint.legend_, None)
def test_hex_customise(self):
# test that default gridsize can be overridden
g = ag.jointplot("x", "y", self.data, kind="hex",
joint_kws=dict(gridsize=5))
nt.assert_equal(len(g.ax_joint.collections), 1)
a = g.ax_joint.collections[0].get_array()
nt.assert_equal(28, a.shape[0]) # 28 hexagons expected for gridsize 5
def test_bad_kind(self):
with nt.assert_raises(ValueError):
ag.jointplot("x", "y", self.data, kind="not_a_kind")
| bsd-3-clause |
mxlei01/healthcareai-py | healthcareai/tests/test_model_eval.py | 4 | 2285 | import unittest
import numpy as np
import pandas as pd
import sklearn
import sklearn.linear_model  # explicit submodule import; LinearRegression is used below
import healthcareai.common.model_eval as hcai_eval
from healthcareai.common.healthcareai_error import HealthcareAIError
import healthcareai.tests.helpers as test_helpers
class TestROC(unittest.TestCase):
def test_roc(self):
df = pd.DataFrame({'a': np.repeat(np.arange(.1, 1.1, .1), 10)})
b = np.repeat(0, 100)
b[[56, 62, 63, 68, 74, 75, 76, 81, 82, 84, 85, 87, 88] + list(range(90, 100))] = 1
df['b'] = b
# ROC_AUC
result = hcai_eval.compute_roc(df['b'], df['a'])
self.assertAlmostEqual(round(result['roc_auc'], 4), 0.9433)
self.assertAlmostEqual(round(result['best_true_positive_rate'], 4), 0.9565)
self.assertAlmostEqual(round(result['best_false_positive_rate'], 4), 0.2338)
class TestPR(unittest.TestCase):
def test_pr(self):
df = pd.DataFrame({'a': np.repeat(np.arange(.1, 1.1, .1), 10)})
b = np.repeat(0, 100)
b[[56, 62, 63, 68, 74, 75, 76, 81, 82, 84, 85, 87, 88] + list(range(90, 100))] = 1
df['b'] = b
# PR_AUC
out = hcai_eval.compute_pr(df['b'], df['a'])
test_helpers.assertBetween(self, 0.8, 0.87, out['pr_auc'])
self.assertAlmostEqual(round(out['best_precision'], 4), 0.8000)
self.assertAlmostEqual(round(out['best_recall'], 4), 0.6957)
class TestPlotRandomForestFeatureImportance(unittest.TestCase):
def test_raises_error_on_non_rf_estimator(self):
linear_regressor = sklearn.linear_model.LinearRegression()
self.assertRaises(
HealthcareAIError,
hcai_eval.plot_random_forest_feature_importance,
linear_regressor,
None,
None,
save=False)
class TestValidation(unittest.TestCase):
def test_same_length_predictions_and_labels(self):
self.assertTrue(hcai_eval._validate_predictions_and_labels_are_equal_length([0, 1, 2], [1, 2, 3]))
def test_different_length_predictions_and_labels_raises_error(self):
self.assertRaises(
HealthcareAIError,
hcai_eval._validate_predictions_and_labels_are_equal_length,
[0, 1, 2],
[0, 1, 2, 3, 4])
if __name__ == '__main__':
unittest.main()
| mit |
kylerbrown/scikit-learn | examples/decomposition/plot_pca_vs_fa_model_selection.py | 142 | 4467 | """
===============================================================
Model selection with Probabilistic PCA and Factor Analysis (FA)
===============================================================
Probabilistic PCA and Factor Analysis are probabilistic models.
The consequence is that the likelihood of new data can be used
for model selection and covariance estimation.
Here we compare PCA and FA with cross-validation on low rank data corrupted
with homoscedastic noise (noise variance
is the same for each feature) or heteroscedastic noise (noise variance
is different for each feature). In a second step we compare the model
likelihood to the likelihoods obtained from shrinkage covariance estimators.
One can observe that with homoscedastic noise both FA and PCA succeed
in recovering the size of the low rank subspace. The likelihood with PCA
is higher than with FA in this case. However, PCA fails and overestimates
the rank when heteroscedastic noise is present. Under appropriate
circumstances the low rank models are more likely than shrinkage models.
The automatic estimation from
Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604
by Thomas P. Minka is also compared.
"""
print(__doc__)
# Authors: Alexandre Gramfort
# Denis A. Engemann
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.covariance import ShrunkCovariance, LedoitWolf
from sklearn.cross_validation import cross_val_score
from sklearn.grid_search import GridSearchCV
###############################################################################
# Create the data
n_samples, n_features, rank = 1000, 50, 10
sigma = 1.
rng = np.random.RandomState(42)
U, _, _ = linalg.svd(rng.randn(n_features, n_features))
X = np.dot(rng.randn(n_samples, rank), U[:, :rank].T)
# Adding homoscedastic noise
X_homo = X + sigma * rng.randn(n_samples, n_features)
# Adding heteroscedastic noise
sigmas = sigma * rng.rand(n_features) + sigma / 2.
X_hetero = X + rng.randn(n_samples, n_features) * sigmas
###############################################################################
# Fit the models
n_components = np.arange(0, n_features, 5) # options for n_components
def compute_scores(X):
pca = PCA()
fa = FactorAnalysis()
pca_scores, fa_scores = [], []
for n in n_components:
pca.n_components = n
fa.n_components = n
pca_scores.append(np.mean(cross_val_score(pca, X)))
fa_scores.append(np.mean(cross_val_score(fa, X)))
return pca_scores, fa_scores
def shrunk_cov_score(X):
shrinkages = np.logspace(-2, 0, 30)
cv = GridSearchCV(ShrunkCovariance(), {'shrinkage': shrinkages})
return np.mean(cross_val_score(cv.fit(X).best_estimator_, X))
def lw_score(X):
return np.mean(cross_val_score(LedoitWolf(), X))
for X, title in [(X_homo, 'Homoscedastic Noise'),
(X_hetero, 'Heteroscedastic Noise')]:
pca_scores, fa_scores = compute_scores(X)
n_components_pca = n_components[np.argmax(pca_scores)]
n_components_fa = n_components[np.argmax(fa_scores)]
pca = PCA(n_components='mle')
pca.fit(X)
n_components_pca_mle = pca.n_components_
print("best n_components by PCA CV = %d" % n_components_pca)
print("best n_components by FactorAnalysis CV = %d" % n_components_fa)
print("best n_components by PCA MLE = %d" % n_components_pca_mle)
plt.figure()
plt.plot(n_components, pca_scores, 'b', label='PCA scores')
plt.plot(n_components, fa_scores, 'r', label='FA scores')
plt.axvline(rank, color='g', label='TRUTH: %d' % rank, linestyle='-')
plt.axvline(n_components_pca, color='b',
label='PCA CV: %d' % n_components_pca, linestyle='--')
plt.axvline(n_components_fa, color='r',
label='FactorAnalysis CV: %d' % n_components_fa, linestyle='--')
plt.axvline(n_components_pca_mle, color='k',
label='PCA MLE: %d' % n_components_pca_mle, linestyle='--')
# compare with other covariance estimators
plt.axhline(shrunk_cov_score(X), color='violet',
label='Shrunk Covariance MLE', linestyle='-.')
plt.axhline(lw_score(X), color='orange',
                label='LedoitWolf MLE', linestyle='-.')
plt.xlabel('nb of components')
plt.ylabel('CV scores')
plt.legend(loc='lower right')
plt.title(title)
plt.show()
| bsd-3-clause |
cogmission/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/backends/backend_macosx.py | 69 | 15397 | from __future__ import division
import os
import numpy
from matplotlib._pylab_helpers import Gcf
from matplotlib.backend_bases import RendererBase, GraphicsContextBase,\
FigureManagerBase, FigureCanvasBase, NavigationToolbar2
from matplotlib.cbook import maxdict
from matplotlib.figure import Figure
from matplotlib.path import Path
from matplotlib.mathtext import MathTextParser
from matplotlib.colors import colorConverter
from matplotlib.widgets import SubplotTool
import matplotlib
from matplotlib.backends import _macosx
def show():
"""Show all the figures and enter the Cocoa mainloop.
This function will not return until all windows are closed or
the interpreter exits."""
# Having a Python-level function "show" wrapping the built-in
# function "show" in the _macosx extension module allows us to
# to add attributes to "show". This is something ipython does.
_macosx.show()
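# Illustrative use of this backend (sketch, not part of the module): select it
# before importing pyplot, and ``pyplot.show()`` ends up calling show() above.
#
#     import matplotlib
#     matplotlib.use('MacOSX')
#     import matplotlib.pyplot as plt
#     plt.plot([1, 2, 3])
#     plt.show()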
class RendererMac(RendererBase):
"""
The renderer handles drawing/rendering operations. Most of the renderer's
    methods forward the command to the renderer's graphics context. The
renderer does not wrap a C object and is written in pure Python.
"""
texd = maxdict(50) # a cache of tex image rasters
def __init__(self, dpi, width, height):
RendererBase.__init__(self)
self.dpi = dpi
self.width = width
self.height = height
self.gc = GraphicsContextMac()
self.mathtext_parser = MathTextParser('MacOSX')
def set_width_height (self, width, height):
self.width, self.height = width, height
def draw_path(self, gc, path, transform, rgbFace=None):
if rgbFace is not None:
rgbFace = tuple(rgbFace)
if gc!=self.gc:
n = self.gc.level() - gc.level()
for i in range(n): self.gc.restore()
self.gc = gc
gc.draw_path(path, transform, rgbFace)
def draw_markers(self, gc, marker_path, marker_trans, path, trans, rgbFace=None):
if rgbFace is not None:
rgbFace = tuple(rgbFace)
if gc!=self.gc:
n = self.gc.level() - gc.level()
for i in range(n): self.gc.restore()
self.gc = gc
gc.draw_markers(marker_path, marker_trans, path, trans, rgbFace)
def draw_path_collection(self, *args):
gc = self.gc
args = args[:13]
gc.draw_path_collection(*args)
def draw_quad_mesh(self, *args):
gc = self.gc
gc.draw_quad_mesh(*args)
def new_gc(self):
self.gc.reset()
return self.gc
def draw_image(self, x, y, im, bbox, clippath=None, clippath_trans=None):
im.flipud_out()
nrows, ncols, data = im.as_rgba_str()
self.gc.draw_image(x, y, nrows, ncols, data, bbox, clippath, clippath_trans)
im.flipud_out()
def draw_tex(self, gc, x, y, s, prop, angle):
if gc!=self.gc:
n = self.gc.level() - gc.level()
for i in range(n): self.gc.restore()
self.gc = gc
# todo, handle props, angle, origins
size = prop.get_size_in_points()
texmanager = self.get_texmanager()
key = s, size, self.dpi, angle, texmanager.get_font_config()
im = self.texd.get(key) # Not sure what this does; just copied from backend_agg.py
if im is None:
Z = texmanager.get_grey(s, size, self.dpi)
Z = numpy.array(255.0 - Z * 255.0, numpy.uint8)
gc.draw_mathtext(x, y, angle, Z)
def _draw_mathtext(self, gc, x, y, s, prop, angle):
if gc!=self.gc:
n = self.gc.level() - gc.level()
for i in range(n): self.gc.restore()
self.gc = gc
size = prop.get_size_in_points()
ox, oy, width, height, descent, image, used_characters = \
self.mathtext_parser.parse(s, self.dpi, prop)
gc.draw_mathtext(x, y, angle, 255 - image.as_array())
def draw_text(self, gc, x, y, s, prop, angle, ismath=False):
if gc!=self.gc:
n = self.gc.level() - gc.level()
for i in range(n): self.gc.restore()
self.gc = gc
if ismath:
self._draw_mathtext(gc, x, y, s, prop, angle)
else:
family = prop.get_family()
size = prop.get_size_in_points()
weight = prop.get_weight()
style = prop.get_style()
gc.draw_text(x, y, unicode(s), family, size, weight, style, angle)
def get_text_width_height_descent(self, s, prop, ismath):
if ismath=='TeX':
# TODO: handle props
size = prop.get_size_in_points()
texmanager = self.get_texmanager()
Z = texmanager.get_grey(s, size, self.dpi)
m,n = Z.shape
# TODO: handle descent; This is based on backend_agg.py
return n, m, 0
if ismath:
ox, oy, width, height, descent, fonts, used_characters = \
self.mathtext_parser.parse(s, self.dpi, prop)
return width, height, descent
family = prop.get_family()
size = prop.get_size_in_points()
weight = prop.get_weight()
style = prop.get_style()
return self.gc.get_text_width_height_descent(unicode(s), family, size, weight, style)
def flipy(self):
return False
def points_to_pixels(self, points):
return points/72.0 * self.dpi
def option_image_nocomposite(self):
return True
class GraphicsContextMac(_macosx.GraphicsContext, GraphicsContextBase):
"""
The GraphicsContext wraps a Quartz graphics context. All methods
are implemented at the C-level in macosx.GraphicsContext. These
methods set drawing properties such as the line style, fill color,
etc. The actual drawing is done by the Renderer, which draws into
the GraphicsContext.
"""
def __init__(self):
GraphicsContextBase.__init__(self)
_macosx.GraphicsContext.__init__(self)
def set_foreground(self, fg, isRGB=False):
if not isRGB:
fg = colorConverter.to_rgb(fg)
_macosx.GraphicsContext.set_foreground(self, fg)
def set_clip_rectangle(self, box):
GraphicsContextBase.set_clip_rectangle(self, box)
if not box: return
_macosx.GraphicsContext.set_clip_rectangle(self, box.bounds)
def set_clip_path(self, path):
GraphicsContextBase.set_clip_path(self, path)
if not path: return
path = path.get_fully_transformed_path()
_macosx.GraphicsContext.set_clip_path(self, path)
########################################################################
#
# The following functions and classes are for pylab and implement
# window/figure managers, etc...
#
########################################################################
def draw_if_interactive():
"""
For performance reasons, we don't want to redraw the figure after
each draw command. Instead, we mark the figure as invalid, so that
it will be redrawn as soon as the event loop resumes via PyOS_InputHook.
This function should be called after each draw event, even if
matplotlib is not running interactively.
"""
figManager = Gcf.get_active()
if figManager is not None:
figManager.canvas.invalidate()
def new_figure_manager(num, *args, **kwargs):
"""
Create a new figure manager instance
"""
FigureClass = kwargs.pop('FigureClass', Figure)
figure = FigureClass(*args, **kwargs)
canvas = FigureCanvasMac(figure)
manager = FigureManagerMac(canvas, num)
return manager
class FigureCanvasMac(_macosx.FigureCanvas, FigureCanvasBase):
"""
The canvas the figure renders into. Calls the draw and print fig
methods, creates the renderers, etc...
Public attribute
figure - A Figure instance
Events such as button presses, mouse movements, and key presses
are handled in the C code and the base class methods
button_press_event, button_release_event, motion_notify_event,
key_press_event, and key_release_event are called from there.
"""
def __init__(self, figure):
FigureCanvasBase.__init__(self, figure)
width, height = self.get_width_height()
self.renderer = RendererMac(figure.dpi, width, height)
_macosx.FigureCanvas.__init__(self, width, height)
def resize(self, width, height):
self.renderer.set_width_height(width, height)
dpi = self.figure.dpi
width /= dpi
height /= dpi
self.figure.set_size_inches(width, height)
def print_figure(self, filename, dpi=None, facecolor='w', edgecolor='w',
orientation='portrait', **kwargs):
if dpi is None: dpi = matplotlib.rcParams['savefig.dpi']
filename = unicode(filename)
root, ext = os.path.splitext(filename)
ext = ext[1:].lower()
if not ext:
ext = "png"
filename = root + "." + ext
if ext=="jpg": ext = "jpeg"
# save the figure settings
origfacecolor = self.figure.get_facecolor()
origedgecolor = self.figure.get_edgecolor()
# set the new parameters
self.figure.set_facecolor(facecolor)
self.figure.set_edgecolor(edgecolor)
if ext in ('jpeg', 'png', 'tiff', 'gif', 'bmp'):
width, height = self.figure.get_size_inches()
width, height = width*dpi, height*dpi
self.write_bitmap(filename, width, height)
elif ext == 'pdf':
self.write_pdf(filename)
elif ext in ('ps', 'eps'):
from backend_ps import FigureCanvasPS
# Postscript backend changes figure.dpi, but doesn't change it back
origDPI = self.figure.dpi
fc = self.switch_backends(FigureCanvasPS)
fc.print_figure(filename, dpi, facecolor, edgecolor,
orientation, **kwargs)
self.figure.dpi = origDPI
self.figure.set_canvas(self)
elif ext=='svg':
from backend_svg import FigureCanvasSVG
fc = self.switch_backends(FigureCanvasSVG)
fc.print_figure(filename, dpi, facecolor, edgecolor,
orientation, **kwargs)
self.figure.set_canvas(self)
else:
raise ValueError("Figure format not available (extension %s)" % ext)
# restore original figure settings
self.figure.set_facecolor(origfacecolor)
self.figure.set_edgecolor(origedgecolor)
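    # Illustrative calls (sketch, not part of the original class; ``canvas``
    # is assumed to be a FigureCanvasMac instance):
    #
    #     canvas.print_figure("figure.png", dpi=150)  # bitmap via write_bitmap
    #     canvas.print_figure("figure.pdf")           # vector via write_pdf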
class FigureManagerMac(_macosx.FigureManager, FigureManagerBase):
"""
Wrap everything up into a window for the pylab interface
"""
def __init__(self, canvas, num):
FigureManagerBase.__init__(self, canvas, num)
title = "Figure %d" % num
_macosx.FigureManager.__init__(self, canvas, title)
if matplotlib.rcParams['toolbar']=='classic':
self.toolbar = NavigationToolbarMac(canvas)
elif matplotlib.rcParams['toolbar']=='toolbar2':
self.toolbar = NavigationToolbar2Mac(canvas)
else:
self.toolbar = None
if self.toolbar is not None:
self.toolbar.update()
def notify_axes_change(fig):
'this will be called whenever the current axes is changed'
            if self.toolbar is not None: self.toolbar.update()
self.canvas.figure.add_axobserver(notify_axes_change)
# This is ugly, but this is what tkagg and gtk are doing.
# It is needed to get ginput() working.
self.canvas.figure.show = lambda *args: self.show()
def show(self):
self.canvas.draw()
def close(self):
Gcf.destroy(self.num)
class NavigationToolbarMac(_macosx.NavigationToolbar):
def __init__(self, canvas):
self.canvas = canvas
basedir = os.path.join(matplotlib.rcParams['datapath'], "images")
images = {}
for imagename in ("stock_left",
"stock_right",
"stock_up",
"stock_down",
"stock_zoom-in",
"stock_zoom-out",
"stock_save_as"):
filename = os.path.join(basedir, imagename+".ppm")
images[imagename] = self._read_ppm_image(filename)
_macosx.NavigationToolbar.__init__(self, images)
self.message = None
def _read_ppm_image(self, filename):
data = ""
imagefile = open(filename)
for line in imagefile:
if "#" in line:
i = line.index("#")
line = line[:i] + "\n"
data += line
imagefile.close()
magic, width, height, maxcolor, imagedata = data.split(None, 4)
width, height = int(width), int(height)
assert magic=="P6"
assert len(imagedata)==width*height*3 # 3 colors in RGB
return (width, height, imagedata)
def panx(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].xaxis.pan(direction)
self.canvas.invalidate()
def pany(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].yaxis.pan(direction)
self.canvas.invalidate()
def zoomx(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].xaxis.zoom(direction)
self.canvas.invalidate()
def zoomy(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].yaxis.zoom(direction)
self.canvas.invalidate()
def save_figure(self):
filename = _macosx.choose_save_file('Save the figure')
if filename is None: # Cancel
return
self.canvas.print_figure(filename)
class NavigationToolbar2Mac(_macosx.NavigationToolbar2, NavigationToolbar2):
def __init__(self, canvas):
NavigationToolbar2.__init__(self, canvas)
def _init_toolbar(self):
basedir = os.path.join(matplotlib.rcParams['datapath'], "images")
_macosx.NavigationToolbar2.__init__(self, basedir)
def draw_rubberband(self, event, x0, y0, x1, y1):
self.canvas.set_rubberband(x0, y0, x1, y1)
def release(self, event):
self.canvas.remove_rubberband()
def set_cursor(self, cursor):
_macosx.set_cursor(cursor)
def save_figure(self):
filename = _macosx.choose_save_file('Save the figure')
if filename is None: # Cancel
return
self.canvas.print_figure(filename)
def prepare_configure_subplots(self):
toolfig = Figure(figsize=(6,3))
canvas = FigureCanvasMac(toolfig)
toolfig.subplots_adjust(top=0.9)
tool = SubplotTool(self.canvas.figure, toolfig)
return canvas
def set_message(self, message):
_macosx.NavigationToolbar2.set_message(self, message.encode('utf-8'))
########################################################################
#
# Now just provide the standard names that backend.__init__ is expecting
#
########################################################################
FigureManager = FigureManagerMac
| agpl-3.0 |
hrjn/scikit-learn | examples/cluster/plot_affinity_propagation.py | 349 | 2304 | """
=================================================
Demo of affinity propagation clustering algorithm
=================================================
Reference:
Brendan J. Frey and Delbert Dueck, "Clustering by Passing Messages
Between Data Points", Science Feb. 2007
"""
print(__doc__)
from sklearn.cluster import AffinityPropagation
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs
##############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=300, centers=centers, cluster_std=0.5,
random_state=0)
##############################################################################
# Compute Affinity Propagation
af = AffinityPropagation(preference=-50).fit(X)
cluster_centers_indices = af.cluster_centers_indices_
labels = af.labels_
n_clusters_ = len(cluster_centers_indices)
print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f"
% metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f"
% metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, labels, metric='sqeuclidean'))
##############################################################################
# Plot result
import matplotlib.pyplot as plt
from itertools import cycle
plt.close('all')
plt.figure(1)
plt.clf()
colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
class_members = labels == k
cluster_center = X[cluster_centers_indices[k]]
plt.plot(X[class_members, 0], X[class_members, 1], col + '.')
plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=14)
for x in X[class_members]:
plt.plot([cluster_center[0], x[0]], [cluster_center[1], x[1]], col)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
| bsd-3-clause |
befelix/SafeMDP | examples/mars/mars_utilities.py | 1 | 8568 | from safemdp.grid_world import *
from osgeo import gdal
from scipy import interpolate
import numpy as np
import os
import matplotlib.pyplot as plt
import GPy
__all__ = ['mars_map', 'initialize_SafeMDP_object', 'performance_metrics']
def mars_map(plot_map=False, interpolation=False):
"""
Extract the map for the simulation from the HiRISE data. If the HiRISE
    data is not in the current folder, it will be downloaded and converted to
    the GeoTIFF format with gdal.
Parameters
----------
plot_map: bool
If true plots the map that will be used for exploration
interpolation: bool
If true the data of the map will be interpolated with splines to
obtain a finer grid
Returns
-------
altitudes: np.array
1-d vector with altitudes for each node
coord: np.array
Coordinate of the map we use for exploration
world_shape: tuple
Size of the grid world (rows, columns)
step_size: tuple
Step size for the grid (row, column)
num_of_points: int
Interpolation parameter. Indicates the scaling factor for the
original step size
"""
# Define the dimension of the map we want to investigate and its resolution
world_shape = (120, 70)
step_size = (1., 1.)
    # Download the Mars data and convert it to GeoTIFF
if not os.path.exists('./mars.tif'):
if not os.path.exists("./mars.IMG"):
import urllib
            print('Downloading MARS map, this may take a while...')
# Download the IMG file
urllib.urlretrieve(
"http://www.uahirise.org/PDS/DTM/PSP/ORB_010200_010299"
"/PSP_010228_1490_ESP_016320_1490"
"/DTEEC_010228_1490_016320_1490_A01.IMG", "mars.IMG")
# Convert to tif
print('Converting map to geotif...')
os.system("gdal_translate -of GTiff ./mars.IMG ./mars.tif")
print('Done')
# Read the data with gdal module
gdal.UseExceptions()
ds = gdal.Open("./mars.tif")
band = ds.GetRasterBand(1)
elevation = band.ReadAsArray()
# Extract the area of interest
startX = 2890
startY = 1955
altitudes = np.copy(elevation[startX:startX + world_shape[0],
startY:startY + world_shape[1]])
# Center the data
mean_val = (np.max(altitudes) + np.min(altitudes)) / 2.
altitudes[:] = altitudes - mean_val
# Define coordinates
n, m = world_shape
step1, step2 = step_size
xx, yy = np.meshgrid(np.linspace(0, (n - 1) * step1, n),
np.linspace(0, (m - 1) * step2, m), indexing="ij")
coord = np.vstack((xx.flatten(), yy.flatten())).T
# Interpolate data
if interpolation:
# Interpolating function
spline_interpolator = interpolate.RectBivariateSpline(
np.linspace(0, (n - 1) * step1, n),
np.linspace(0, (m - 1) * step1, m), altitudes)
# New size and resolution
num_of_points = 1
world_shape = tuple([(x - 1) * num_of_points + 1 for x in world_shape])
step_size = tuple([x / num_of_points for x in step_size])
# New coordinates and altitudes
n, m = world_shape
step1, step2 = step_size
xx, yy = np.meshgrid(np.linspace(0, (n - 1) * step1, n),
np.linspace(0, (m - 1) * step2, m), indexing="ij")
coord = np.vstack((xx.flatten(), yy.flatten())).T
altitudes = spline_interpolator(np.linspace(0, (n - 1) * step1, n),
np.linspace(0, (m - 1) * step2, m))
else:
num_of_points = 1
# Plot area
if plot_map:
plt.imshow(altitudes.T, origin="lower", interpolation="nearest")
plt.colorbar()
plt.show()
altitudes = altitudes.flatten()
return altitudes, coord, world_shape, step_size, num_of_points
def initialize_SafeMDP_object(altitudes, coord, world_shape, step_size, L=0.2,
beta=2, length=14.5, sigma_n=0.075, start_x=60,
start_y=61):
"""
Parameters
----------
altitudes: np.array
1-d vector with altitudes for each node
coord: np.array
Coordinate of the map we use for exploration
world_shape: tuple
Size of the grid world (rows, columns)
step_size: tuple
Step size for the grid (row, column)
L: float
Lipschitz constant to compute expanders
beta: float
Scaling factor for confidence intervals
length: float
Lengthscale for Matern kernel
    sigma_n: float
        Standard deviation for Gaussian noise
start_x: int
x coordinate of the starting point
    start_y: int
y coordinate of the starting point
Returns
-------
start: int
Node number of initial state
x: SafeMDP
Instance of the SafeMDP class for the mars exploration problem
true_S_hat: np.array
True S_hat if safety feature is known with no error and h_hard is used
true_S_hat_epsilon: np.array
True S_hat if safety feature is known up to epsilon and h is used
h_hard: float
        True safety threshold. It can be different from the safety threshold
        used for classification in case the agent needs to use extra caution
        (in our experiments h=25 deg, h_hard=30 deg)
"""
# Safety threshold
h = -np.tan(np.pi / 9. + np.pi / 36.) * step_size[0]
    # Initial node
start = start_x * world_shape[1] + start_y
# Initial safe sets
S_hat0 = compute_S_hat0(start, world_shape, 4, altitudes,
step_size, h)
S0 = np.copy(S_hat0)
S0[:, 0] = True
# Initialize GP
X = coord[start, :].reshape(1, 2)
Y = altitudes[start].reshape(1, 1)
kernel = GPy.kern.Matern52(input_dim=2, lengthscale=length, variance=100.)
lik = GPy.likelihoods.Gaussian(variance=sigma_n ** 2)
gp = GPy.core.GP(X, Y, kernel, lik)
# Define SafeMDP object
x = GridWorld(gp, world_shape, step_size, beta, altitudes, h, S0,
S_hat0, L, update_dist=25)
# Add samples about actions from starting node
for i in range(5):
x.add_observation(start, 1)
x.add_observation(start, 2)
x.add_observation(start, 3)
x.add_observation(start, 4)
x.gp.set_XY(X=x.gp.X[1:, :], Y=x.gp.Y[1:, :]) # Necessary for results as in
# paper
# True safe set for false safe
h_hard = -np.tan(np.pi / 6.) * step_size[0]
true_S = compute_true_safe_set(x.world_shape, x.altitudes, h_hard)
true_S_hat = compute_true_S_hat(x.graph, true_S, x.initial_nodes)
# True safe set for completeness
epsilon = sigma_n * beta
true_S_epsilon = compute_true_safe_set(x.world_shape, x.altitudes,
x.h + epsilon)
true_S_hat_epsilon = compute_true_S_hat(x.graph, true_S_epsilon,
x.initial_nodes)
return start, x, true_S_hat, true_S_hat_epsilon, h_hard
def performance_metrics(path, x, true_S_hat_epsilon, true_S_hat, h_hard):
"""
Parameters
----------
path: np.array
Nodes of the shortest safe path
x: SafeMDP
Instance of the SafeMDP class for the mars exploration problem
true_S_hat_epsilon: np.array
True S_hat if safety feature is known up to epsilon and h is used
true_S_hat: np.array
True S_hat if safety feature is known with no error and h_hard is used
h_hard: float
        True safety threshold. It can be different from the safety threshold
        used for classification in case the agent needs to use extra caution
        (in our experiments h=25 deg, h_hard=30 deg)
Returns
-------
unsafe_transitions: int
Number of unsafe transitions along the path
coverage: float
Percentage of coverage of true_S_hat_epsilon
false_safe: int
        Number of misclassifications (classifying something as safe when it
        actually is unsafe according to h_hard)
"""
# Count unsafe transitions along the path
path_altitudes = x.altitudes[path]
unsafe_transitions = np.sum(-np.diff(path_altitudes) < h_hard)
# Coverage
max_size = float(np.count_nonzero(true_S_hat_epsilon))
coverage = 100 * np.count_nonzero(np.logical_and(x.S_hat,
true_S_hat_epsilon))/max_size
# False safe
false_safe = np.count_nonzero(np.logical_and(x.S_hat, ~true_S_hat))
return unsafe_transitions, coverage, false_safe
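# Minimal usage sketch (not part of the original module): it only chains the
# helpers defined above. Note that mars_map() downloads a large HiRISE DTM on
# first use, and the ``path`` argument of performance_metrics() would normally
# come from running the SafeMDP exploration loop, which is outside this file.
if __name__ == '__main__':
    altitudes, coord, world_shape, step_size, _ = mars_map(plot_map=False)
    start, x, true_S_hat, true_S_hat_eps, h_hard = initialize_SafeMDP_object(
        altitudes, coord, world_shape, step_size)
    print('Starting node: %d' % start)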
| mit |
nhejazi/scikit-learn | sklearn/metrics/tests/test_common.py | 8 | 43668 | from __future__ import division, print_function
from functools import partial
from itertools import product
import numpy as np
import scipy.sparse as sp
from sklearn.datasets import make_multilabel_classification
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils.multiclass import type_of_target
from sklearn.utils.validation import check_random_state
from sklearn.utils import shuffle
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_not_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.testing import _named_check
from sklearn.metrics import accuracy_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import brier_score_loss
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import coverage_error
from sklearn.metrics import explained_variance_score
from sklearn.metrics import f1_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import hamming_loss
from sklearn.metrics import hinge_loss
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.metrics import label_ranking_loss
from sklearn.metrics import log_loss
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import median_absolute_error
from sklearn.metrics import precision_score
from sklearn.metrics import r2_score
from sklearn.metrics import recall_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import zero_one_loss
# TODO Curve are currently not covered by invariance test
# from sklearn.metrics import precision_recall_curve
# from sklearn.metrics import roc_curve
from sklearn.metrics.base import _average_binary_score
# Note toward developers about metric testing
# -------------------------------------------
# It is often possible to write one general test for several metrics:
#
# - invariance properties, e.g. invariance to sample order
# - common behavior for an argument, e.g. the "normalize" with value True
# will return the mean of the metrics and with value False will return
# the sum of the metrics.
#
# In order to improve the overall metric testing, it is a good idea to write
# first a specific test for the given metric and then add a general test for
# all metrics that have the same behavior.
#
# Two types of data structures are used in order to implement this system:
# dictionaries of metrics and lists of metrics with common properties.
#
# Dictionaries of metrics
# ------------------------
# The goal of having those dictionaries is to have an easy way to call a
# particular metric and associate a name to each function:
#
# - REGRESSION_METRICS: all regression metrics.
# - CLASSIFICATION_METRICS: all classification metrics
# which compare a ground truth and the estimated targets as returned by a
# classifier.
# - THRESHOLDED_METRICS: all classification metrics which
# compare a ground truth and a score, e.g. estimated probabilities or
# decision function (format might vary)
#
# Those dictionaries will be used to test systematically some invariance
# properties, e.g. invariance toward several input layouts.
#
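# Illustrative sketch of how these containers are used (not an actual test in
# this file): a single loop can check an invariance property, e.g. that
# shuffling the samples does not change a score, for every metric that
# supports the given input type:
#
#     rng = check_random_state(0)
#     y_true = rng.randint(0, 2, size=20)
#     y_pred = rng.randint(0, 2, size=20)
#     perm = rng.permutation(20)
#     for name, metric in CLASSIFICATION_METRICS.items():
#         if name in METRIC_UNDEFINED_BINARY or name == "confusion_matrix":
#             continue
#         assert_almost_equal(metric(y_true, y_pred),
#                             metric(y_true[perm], y_pred[perm]))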
REGRESSION_METRICS = {
"mean_absolute_error": mean_absolute_error,
"mean_squared_error": mean_squared_error,
"median_absolute_error": median_absolute_error,
"explained_variance_score": explained_variance_score,
"r2_score": partial(r2_score, multioutput='variance_weighted'),
}
CLASSIFICATION_METRICS = {
"accuracy_score": accuracy_score,
"unnormalized_accuracy_score": partial(accuracy_score, normalize=False),
"confusion_matrix": confusion_matrix,
"hamming_loss": hamming_loss,
"jaccard_similarity_score": jaccard_similarity_score,
"unnormalized_jaccard_similarity_score":
partial(jaccard_similarity_score, normalize=False),
"zero_one_loss": zero_one_loss,
"unnormalized_zero_one_loss": partial(zero_one_loss, normalize=False),
# These are needed to test averaging
"precision_score": precision_score,
"recall_score": recall_score,
"f1_score": f1_score,
"f2_score": partial(fbeta_score, beta=2),
"f0.5_score": partial(fbeta_score, beta=0.5),
"matthews_corrcoef_score": matthews_corrcoef,
"weighted_f0.5_score": partial(fbeta_score, average="weighted", beta=0.5),
"weighted_f1_score": partial(f1_score, average="weighted"),
"weighted_f2_score": partial(fbeta_score, average="weighted", beta=2),
"weighted_precision_score": partial(precision_score, average="weighted"),
"weighted_recall_score": partial(recall_score, average="weighted"),
"micro_f0.5_score": partial(fbeta_score, average="micro", beta=0.5),
"micro_f1_score": partial(f1_score, average="micro"),
"micro_f2_score": partial(fbeta_score, average="micro", beta=2),
"micro_precision_score": partial(precision_score, average="micro"),
"micro_recall_score": partial(recall_score, average="micro"),
"macro_f0.5_score": partial(fbeta_score, average="macro", beta=0.5),
"macro_f1_score": partial(f1_score, average="macro"),
"macro_f2_score": partial(fbeta_score, average="macro", beta=2),
"macro_precision_score": partial(precision_score, average="macro"),
"macro_recall_score": partial(recall_score, average="macro"),
"samples_f0.5_score": partial(fbeta_score, average="samples", beta=0.5),
"samples_f1_score": partial(f1_score, average="samples"),
"samples_f2_score": partial(fbeta_score, average="samples", beta=2),
"samples_precision_score": partial(precision_score, average="samples"),
"samples_recall_score": partial(recall_score, average="samples"),
"cohen_kappa_score": cohen_kappa_score,
}
THRESHOLDED_METRICS = {
"coverage_error": coverage_error,
"label_ranking_loss": label_ranking_loss,
"log_loss": log_loss,
"unnormalized_log_loss": partial(log_loss, normalize=False),
"hinge_loss": hinge_loss,
"brier_score_loss": brier_score_loss,
"roc_auc_score": roc_auc_score,
"weighted_roc_auc": partial(roc_auc_score, average="weighted"),
"samples_roc_auc": partial(roc_auc_score, average="samples"),
"micro_roc_auc": partial(roc_auc_score, average="micro"),
"macro_roc_auc": partial(roc_auc_score, average="macro"),
"average_precision_score": average_precision_score,
"weighted_average_precision_score":
partial(average_precision_score, average="weighted"),
"samples_average_precision_score":
partial(average_precision_score, average="samples"),
"micro_average_precision_score":
partial(average_precision_score, average="micro"),
"macro_average_precision_score":
partial(average_precision_score, average="macro"),
"label_ranking_average_precision_score":
label_ranking_average_precision_score,
}
ALL_METRICS = dict()
ALL_METRICS.update(THRESHOLDED_METRICS)
ALL_METRICS.update(CLASSIFICATION_METRICS)
ALL_METRICS.update(REGRESSION_METRICS)
# Lists of metrics with common properties
# ---------------------------------------
# Lists of metrics with common properties are used to test systematically some
# functionalities and invariance, e.g. SYMMETRIC_METRICS lists all metrics that
# are symmetric with respect to their input argument y_true and y_pred.
#
# When you add a new metric or functionality, check if a general test
# is already written.
# Those metrics don't support binary inputs
METRIC_UNDEFINED_BINARY = [
"samples_f0.5_score",
"samples_f1_score",
"samples_f2_score",
"samples_precision_score",
"samples_recall_score",
"coverage_error",
"roc_auc_score",
"micro_roc_auc",
"weighted_roc_auc",
"macro_roc_auc",
"samples_roc_auc",
"average_precision_score",
"weighted_average_precision_score",
"micro_average_precision_score",
"macro_average_precision_score",
"samples_average_precision_score",
"label_ranking_loss",
"label_ranking_average_precision_score",
]
# Those metrics don't support multiclass inputs
METRIC_UNDEFINED_MULTICLASS = [
"brier_score_loss",
# with default average='binary', multiclass is prohibited
"precision_score",
"recall_score",
"f1_score",
"f2_score",
"f0.5_score",
]
# Metric undefined with "binary" or "multiclass" input
METRIC_UNDEFINED_BINARY_MULTICLASS = set(METRIC_UNDEFINED_BINARY).union(
set(METRIC_UNDEFINED_MULTICLASS))
# Metrics with an "average" argument
METRICS_WITH_AVERAGING = [
"precision_score", "recall_score", "f1_score", "f2_score", "f0.5_score"
]
# Threshold-based metrics with an "average" argument
THRESHOLDED_METRICS_WITH_AVERAGING = [
"roc_auc_score", "average_precision_score",
]
# Metrics with a "pos_label" argument
METRICS_WITH_POS_LABEL = [
"roc_curve",
"brier_score_loss",
"precision_score", "recall_score", "f1_score", "f2_score", "f0.5_score",
# pos_label support deprecated; to be removed in 0.18:
"weighted_f0.5_score", "weighted_f1_score", "weighted_f2_score",
"weighted_precision_score", "weighted_recall_score",
"micro_f0.5_score", "micro_f1_score", "micro_f2_score",
"micro_precision_score", "micro_recall_score",
"macro_f0.5_score", "macro_f1_score", "macro_f2_score",
"macro_precision_score", "macro_recall_score",
]
# Metrics with a "labels" argument
# TODO: Handle multi_class metrics that have a labels argument as well as a
# decision function argument, e.g. hinge_loss
METRICS_WITH_LABELS = [
"confusion_matrix",
"hamming_loss",
"precision_score", "recall_score", "f1_score", "f2_score", "f0.5_score",
"weighted_f0.5_score", "weighted_f1_score", "weighted_f2_score",
"weighted_precision_score", "weighted_recall_score",
"micro_f0.5_score", "micro_f1_score", "micro_f2_score",
"micro_precision_score", "micro_recall_score",
"macro_f0.5_score", "macro_f1_score", "macro_f2_score",
"macro_precision_score", "macro_recall_score",
"cohen_kappa_score",
]
# Metrics with a "normalize" option
METRICS_WITH_NORMALIZE_OPTION = [
"accuracy_score",
"jaccard_similarity_score",
"zero_one_loss",
]
# Threshold-based metrics with "multilabel-indicator" format support
THRESHOLDED_MULTILABEL_METRICS = [
"log_loss",
"unnormalized_log_loss",
"roc_auc_score", "weighted_roc_auc", "samples_roc_auc",
"micro_roc_auc", "macro_roc_auc",
"average_precision_score", "weighted_average_precision_score",
"samples_average_precision_score", "micro_average_precision_score",
"macro_average_precision_score",
"coverage_error", "label_ranking_loss",
]
# Classification metrics with "multilabel-indicator" format
MULTILABELS_METRICS = [
"accuracy_score", "unnormalized_accuracy_score",
"hamming_loss",
"jaccard_similarity_score", "unnormalized_jaccard_similarity_score",
"zero_one_loss", "unnormalized_zero_one_loss",
"weighted_f0.5_score", "weighted_f1_score", "weighted_f2_score",
"weighted_precision_score", "weighted_recall_score",
"macro_f0.5_score", "macro_f1_score", "macro_f2_score",
"macro_precision_score", "macro_recall_score",
"micro_f0.5_score", "micro_f1_score", "micro_f2_score",
"micro_precision_score", "micro_recall_score",
"samples_f0.5_score", "samples_f1_score", "samples_f2_score",
"samples_precision_score", "samples_recall_score",
]
# Regression metrics with "multioutput-continuous" format support
MULTIOUTPUT_METRICS = [
"mean_absolute_error", "mean_squared_error", "r2_score",
"explained_variance_score"
]
# Symmetric with respect to their input arguments y_true and y_pred
# metric(y_true, y_pred) == metric(y_pred, y_true).
SYMMETRIC_METRICS = [
"accuracy_score", "unnormalized_accuracy_score",
"hamming_loss",
"jaccard_similarity_score", "unnormalized_jaccard_similarity_score",
"zero_one_loss", "unnormalized_zero_one_loss",
"f1_score", "micro_f1_score", "macro_f1_score",
"weighted_recall_score",
# P = R = F = accuracy in multiclass case
"micro_f0.5_score", "micro_f1_score", "micro_f2_score",
"micro_precision_score", "micro_recall_score",
"matthews_corrcoef_score", "mean_absolute_error", "mean_squared_error",
"median_absolute_error",
"cohen_kappa_score",
]
# Asymmetric with respect to their input arguments y_true and y_pred
# metric(y_true, y_pred) != metric(y_pred, y_true).
NOT_SYMMETRIC_METRICS = [
"explained_variance_score",
"r2_score",
"confusion_matrix",
"precision_score", "recall_score", "f2_score", "f0.5_score",
"weighted_f0.5_score", "weighted_f1_score", "weighted_f2_score",
"weighted_precision_score",
"macro_f0.5_score", "macro_f2_score", "macro_precision_score",
"macro_recall_score", "log_loss", "hinge_loss"
]
# No Sample weight support
METRICS_WITHOUT_SAMPLE_WEIGHT = [
"confusion_matrix", # Left this one here because the tests in this file do
# not work for confusion_matrix, as its output is a
# matrix instead of a number. Testing of
# confusion_matrix with sample_weight is in
# test_classification.py
"median_absolute_error",
]
@ignore_warnings
def test_symmetry():
# Test the symmetry of score and loss functions
random_state = check_random_state(0)
y_true = random_state.randint(0, 2, size=(20, ))
y_pred = random_state.randint(0, 2, size=(20, ))
# We shouldn't forget any metrics
assert_equal(set(SYMMETRIC_METRICS).union(
NOT_SYMMETRIC_METRICS, THRESHOLDED_METRICS,
METRIC_UNDEFINED_BINARY_MULTICLASS),
set(ALL_METRICS))
assert_equal(
set(SYMMETRIC_METRICS).intersection(set(NOT_SYMMETRIC_METRICS)),
set([]))
# Symmetric metric
for name in SYMMETRIC_METRICS:
metric = ALL_METRICS[name]
assert_almost_equal(metric(y_true, y_pred),
metric(y_pred, y_true),
err_msg="%s is not symmetric" % name)
# Not symmetric metrics
for name in NOT_SYMMETRIC_METRICS:
metric = ALL_METRICS[name]
assert_true(np.any(metric(y_true, y_pred) != metric(y_pred, y_true)),
msg="%s seems to be symmetric" % name)
@ignore_warnings
def test_sample_order_invariance():
random_state = check_random_state(0)
y_true = random_state.randint(0, 2, size=(20, ))
y_pred = random_state.randint(0, 2, size=(20, ))
y_true_shuffle, y_pred_shuffle = shuffle(y_true, y_pred, random_state=0)
for name, metric in ALL_METRICS.items():
if name in METRIC_UNDEFINED_BINARY_MULTICLASS:
continue
assert_almost_equal(metric(y_true, y_pred),
metric(y_true_shuffle, y_pred_shuffle),
err_msg="%s is not sample order invariant"
% name)
@ignore_warnings
def test_sample_order_invariance_multilabel_and_multioutput():
random_state = check_random_state(0)
# Generate some data
y_true = random_state.randint(0, 2, size=(20, 25))
y_pred = random_state.randint(0, 2, size=(20, 25))
y_score = random_state.normal(size=y_true.shape)
y_true_shuffle, y_pred_shuffle, y_score_shuffle = shuffle(y_true,
y_pred,
y_score,
random_state=0)
for name in MULTILABELS_METRICS:
metric = ALL_METRICS[name]
assert_almost_equal(metric(y_true, y_pred),
metric(y_true_shuffle, y_pred_shuffle),
err_msg="%s is not sample order invariant"
% name)
for name in THRESHOLDED_MULTILABEL_METRICS:
metric = ALL_METRICS[name]
assert_almost_equal(metric(y_true, y_score),
metric(y_true_shuffle, y_score_shuffle),
err_msg="%s is not sample order invariant"
% name)
for name in MULTIOUTPUT_METRICS:
metric = ALL_METRICS[name]
assert_almost_equal(metric(y_true, y_score),
metric(y_true_shuffle, y_score_shuffle),
err_msg="%s is not sample order invariant"
% name)
assert_almost_equal(metric(y_true, y_pred),
metric(y_true_shuffle, y_pred_shuffle),
err_msg="%s is not sample order invariant"
% name)
@ignore_warnings
def test_format_invariance_with_1d_vectors():
random_state = check_random_state(0)
y1 = random_state.randint(0, 2, size=(20, ))
y2 = random_state.randint(0, 2, size=(20, ))
y1_list = list(y1)
y2_list = list(y2)
y1_1d, y2_1d = np.array(y1), np.array(y2)
assert_equal(y1_1d.ndim, 1)
assert_equal(y2_1d.ndim, 1)
y1_column = np.reshape(y1_1d, (-1, 1))
y2_column = np.reshape(y2_1d, (-1, 1))
y1_row = np.reshape(y1_1d, (1, -1))
y2_row = np.reshape(y2_1d, (1, -1))
for name, metric in ALL_METRICS.items():
if name in METRIC_UNDEFINED_BINARY_MULTICLASS:
continue
measure = metric(y1, y2)
assert_almost_equal(metric(y1_list, y2_list), measure,
err_msg="%s is not representation invariant "
"with list" % name)
assert_almost_equal(metric(y1_1d, y2_1d), measure,
err_msg="%s is not representation invariant "
"with np-array-1d" % name)
assert_almost_equal(metric(y1_column, y2_column), measure,
err_msg="%s is not representation invariant "
"with np-array-column" % name)
# Mix format support
assert_almost_equal(metric(y1_1d, y2_list), measure,
err_msg="%s is not representation invariant "
"with mix np-array-1d and list" % name)
assert_almost_equal(metric(y1_list, y2_1d), measure,
err_msg="%s is not representation invariant "
"with mix np-array-1d and list" % name)
assert_almost_equal(metric(y1_1d, y2_column), measure,
err_msg="%s is not representation invariant "
"with mix np-array-1d and np-array-column"
% name)
assert_almost_equal(metric(y1_column, y2_1d), measure,
err_msg="%s is not representation invariant "
"with mix np-array-1d and np-array-column"
% name)
assert_almost_equal(metric(y1_list, y2_column), measure,
err_msg="%s is not representation invariant "
"with mix list and np-array-column"
% name)
assert_almost_equal(metric(y1_column, y2_list), measure,
err_msg="%s is not representation invariant "
"with mix list and np-array-column"
% name)
        # These mixed representations aren't allowed
assert_raises(ValueError, metric, y1_1d, y2_row)
assert_raises(ValueError, metric, y1_row, y2_1d)
assert_raises(ValueError, metric, y1_list, y2_row)
assert_raises(ValueError, metric, y1_row, y2_list)
assert_raises(ValueError, metric, y1_column, y2_row)
assert_raises(ValueError, metric, y1_row, y2_column)
# NB: We do not test for y1_row, y2_row as these may be
# interpreted as multilabel or multioutput data.
if (name not in (MULTIOUTPUT_METRICS + THRESHOLDED_MULTILABEL_METRICS +
MULTILABELS_METRICS)):
assert_raises(ValueError, metric, y1_row, y2_row)
@ignore_warnings
def test_invariance_string_vs_numbers_labels():
    # Ensure that classification metrics behave the same whether the labels
    # are given as strings or as numbers
random_state = check_random_state(0)
y1 = random_state.randint(0, 2, size=(20, ))
y2 = random_state.randint(0, 2, size=(20, ))
y1_str = np.array(["eggs", "spam"])[y1]
y2_str = np.array(["eggs", "spam"])[y2]
pos_label_str = "spam"
labels_str = ["eggs", "spam"]
for name, metric in CLASSIFICATION_METRICS.items():
if name in METRIC_UNDEFINED_BINARY_MULTICLASS:
continue
measure_with_number = metric(y1, y2)
# Ugly, but handle case with a pos_label and label
metric_str = metric
if name in METRICS_WITH_POS_LABEL:
metric_str = partial(metric_str, pos_label=pos_label_str)
measure_with_str = metric_str(y1_str, y2_str)
assert_array_equal(measure_with_number, measure_with_str,
err_msg="{0} failed string vs number invariance "
"test".format(name))
measure_with_strobj = metric_str(y1_str.astype('O'),
y2_str.astype('O'))
assert_array_equal(measure_with_number, measure_with_strobj,
err_msg="{0} failed string object vs number "
"invariance test".format(name))
if name in METRICS_WITH_LABELS:
metric_str = partial(metric_str, labels=labels_str)
measure_with_str = metric_str(y1_str, y2_str)
assert_array_equal(measure_with_number, measure_with_str,
err_msg="{0} failed string vs number "
"invariance test".format(name))
measure_with_strobj = metric_str(y1_str.astype('O'),
y2_str.astype('O'))
assert_array_equal(measure_with_number, measure_with_strobj,
err_msg="{0} failed string vs number "
"invariance test".format(name))
for name, metric in THRESHOLDED_METRICS.items():
if name in ("log_loss", "hinge_loss", "unnormalized_log_loss",
"brier_score_loss"):
# Ugly, but handle case with a pos_label and label
metric_str = metric
if name in METRICS_WITH_POS_LABEL:
metric_str = partial(metric_str, pos_label=pos_label_str)
measure_with_number = metric(y1, y2)
measure_with_str = metric_str(y1_str, y2)
assert_array_equal(measure_with_number, measure_with_str,
err_msg="{0} failed string vs number "
"invariance test".format(name))
measure_with_strobj = metric(y1_str.astype('O'), y2)
assert_array_equal(measure_with_number, measure_with_strobj,
err_msg="{0} failed string object vs number "
"invariance test".format(name))
else:
            # TODO: these metrics do not support string labels yet
assert_raises(ValueError, metric, y1_str, y2)
assert_raises(ValueError, metric, y1_str.astype('O'), y2)
def test_inf_nan_input():
    invalids = [([0, 1], [np.inf, np.inf]),
                ([0, 1], [np.nan, np.nan]),
                ([0, 1], [np.nan, np.inf])]
METRICS = dict()
METRICS.update(THRESHOLDED_METRICS)
METRICS.update(REGRESSION_METRICS)
for metric in METRICS.values():
for y_true, y_score in invalids:
assert_raise_message(ValueError,
"contains NaN, infinity",
metric, y_true, y_score)
# Classification metrics all raise a mixed input exception
for metric in CLASSIFICATION_METRICS.values():
for y_true, y_score in invalids:
assert_raise_message(ValueError,
"Classification metrics can't handle a mix "
"of binary and continuous targets",
metric, y_true, y_score)
@ignore_warnings
def check_single_sample(name):
# Non-regression test: scores should work with a single sample.
# This is important for leave-one-out cross validation.
# Score functions tested are those that formerly called np.squeeze,
# which turns an array of size 1 into a 0-d array (!).
metric = ALL_METRICS[name]
# assert that no exception is thrown
for i, j in product([0, 1], repeat=2):
metric([i], [j])
@ignore_warnings
def check_single_sample_multioutput(name):
metric = ALL_METRICS[name]
for i, j, k, l in product([0, 1], repeat=4):
metric(np.array([[i, j]]), np.array([[k, l]]))
def test_single_sample():
for name in ALL_METRICS:
if (name in METRIC_UNDEFINED_BINARY_MULTICLASS or
name in THRESHOLDED_METRICS):
# Those metrics are not always defined with one sample
# or in multiclass classification
continue
yield check_single_sample, name
for name in MULTIOUTPUT_METRICS + MULTILABELS_METRICS:
yield check_single_sample_multioutput, name
def test_multioutput_number_of_output_differ():
y_true = np.array([[1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 1]])
y_pred = np.array([[0, 0], [1, 0], [0, 0]])
for name in MULTIOUTPUT_METRICS:
metric = ALL_METRICS[name]
assert_raises(ValueError, metric, y_true, y_pred)
def test_multioutput_regression_invariance_to_dimension_shuffling():
# test invariance to dimension shuffling
random_state = check_random_state(0)
y_true = random_state.uniform(0, 2, size=(20, 5))
y_pred = random_state.uniform(0, 2, size=(20, 5))
for name in MULTIOUTPUT_METRICS:
metric = ALL_METRICS[name]
error = metric(y_true, y_pred)
for _ in range(3):
perm = random_state.permutation(y_true.shape[1])
assert_almost_equal(metric(y_true[:, perm], y_pred[:, perm]),
error,
err_msg="%s is not dimension shuffling "
"invariant" % name)
@ignore_warnings
def test_multilabel_representation_invariance():
# Generate some data
n_classes = 4
n_samples = 50
_, y1 = make_multilabel_classification(n_features=1, n_classes=n_classes,
random_state=0, n_samples=n_samples,
allow_unlabeled=True)
_, y2 = make_multilabel_classification(n_features=1, n_classes=n_classes,
random_state=1, n_samples=n_samples,
allow_unlabeled=True)
# To make sure at least one empty label is present
y1 = np.vstack([y1, [[0] * n_classes]])
y2 = np.vstack([y2, [[0] * n_classes]])
y1_sparse_indicator = sp.coo_matrix(y1)
y2_sparse_indicator = sp.coo_matrix(y2)
for name in MULTILABELS_METRICS:
metric = ALL_METRICS[name]
# XXX cruel hack to work with partial functions
if isinstance(metric, partial):
metric.__module__ = 'tmp'
metric.__name__ = name
measure = metric(y1, y2)
# Check representation invariance
assert_almost_equal(metric(y1_sparse_indicator,
y2_sparse_indicator),
measure,
err_msg="%s failed representation invariance "
"between dense and sparse indicator "
"formats." % name)
def test_raise_value_error_multilabel_sequences():
# make sure the multilabel-sequence format raises ValueError
multilabel_sequences = [
[[0, 1]],
[[1], [2], [0, 1]],
[(), (2), (0, 1)],
[[]],
[()],
np.array([[], [1, 2]], dtype='object')]
for name in MULTILABELS_METRICS:
metric = ALL_METRICS[name]
for seq in multilabel_sequences:
assert_raises(ValueError, metric, seq, seq)
def test_normalize_option_binary_classification(n_samples=20):
# Test in the binary case
random_state = check_random_state(0)
y_true = random_state.randint(0, 2, size=(n_samples, ))
y_pred = random_state.randint(0, 2, size=(n_samples, ))
for name in METRICS_WITH_NORMALIZE_OPTION:
metrics = ALL_METRICS[name]
measure = metrics(y_true, y_pred, normalize=True)
assert_greater(measure, 0,
msg="We failed to test correctly the normalize option")
assert_almost_equal(metrics(y_true, y_pred, normalize=False)
/ n_samples, measure)
def test_normalize_option_multiclass_classification():
# Test in the multiclass case
random_state = check_random_state(0)
y_true = random_state.randint(0, 4, size=(20, ))
y_pred = random_state.randint(0, 4, size=(20, ))
n_samples = y_true.shape[0]
for name in METRICS_WITH_NORMALIZE_OPTION:
metrics = ALL_METRICS[name]
measure = metrics(y_true, y_pred, normalize=True)
assert_greater(measure, 0,
msg="We failed to test correctly the normalize option")
assert_almost_equal(metrics(y_true, y_pred, normalize=False)
/ n_samples, measure)
def test_normalize_option_multilabel_classification():
# Test in the multilabel case
n_classes = 4
n_samples = 100
    # for both random_state 0 and 1, y_true and y_pred have at least one
    # unlabelled entry
_, y_true = make_multilabel_classification(n_features=1,
n_classes=n_classes,
random_state=0,
allow_unlabeled=True,
n_samples=n_samples)
_, y_pred = make_multilabel_classification(n_features=1,
n_classes=n_classes,
random_state=1,
allow_unlabeled=True,
n_samples=n_samples)
# To make sure at least one empty label is present
y_true += [0]*n_classes
y_pred += [0]*n_classes
for name in METRICS_WITH_NORMALIZE_OPTION:
metrics = ALL_METRICS[name]
measure = metrics(y_true, y_pred, normalize=True)
assert_greater(measure, 0,
msg="We failed to test correctly the normalize option")
assert_almost_equal(metrics(y_true, y_pred, normalize=False)
/ n_samples, measure,
err_msg="Failed with %s" % name)
@ignore_warnings
def _check_averaging(metric, y_true, y_pred, y_true_binarize, y_pred_binarize,
is_multilabel):
n_samples, n_classes = y_true_binarize.shape
# No averaging
label_measure = metric(y_true, y_pred, average=None)
assert_array_almost_equal(label_measure,
[metric(y_true_binarize[:, i],
y_pred_binarize[:, i])
for i in range(n_classes)])
# Micro measure
micro_measure = metric(y_true, y_pred, average="micro")
assert_almost_equal(micro_measure, metric(y_true_binarize.ravel(),
y_pred_binarize.ravel()))
# Macro measure
macro_measure = metric(y_true, y_pred, average="macro")
assert_almost_equal(macro_measure, np.mean(label_measure))
# Weighted measure
weights = np.sum(y_true_binarize, axis=0, dtype=int)
if np.sum(weights) != 0:
weighted_measure = metric(y_true, y_pred, average="weighted")
assert_almost_equal(weighted_measure, np.average(label_measure,
weights=weights))
else:
weighted_measure = metric(y_true, y_pred, average="weighted")
assert_almost_equal(weighted_measure, 0)
# Sample measure
if is_multilabel:
sample_measure = metric(y_true, y_pred, average="samples")
assert_almost_equal(sample_measure,
np.mean([metric(y_true_binarize[i],
y_pred_binarize[i])
for i in range(n_samples)]))
assert_raises(ValueError, metric, y_true, y_pred, average="unknown")
assert_raises(ValueError, metric, y_true, y_pred, average="garbage")
def check_averaging(name, y_true, y_true_binarize, y_pred, y_pred_binarize,
y_score):
is_multilabel = type_of_target(y_true).startswith("multilabel")
metric = ALL_METRICS[name]
if name in METRICS_WITH_AVERAGING:
_check_averaging(metric, y_true, y_pred, y_true_binarize,
y_pred_binarize, is_multilabel)
elif name in THRESHOLDED_METRICS_WITH_AVERAGING:
_check_averaging(metric, y_true, y_score, y_true_binarize,
y_score, is_multilabel)
else:
raise ValueError("Metric is not recorded as having an average option")
def test_averaging_multiclass(n_samples=50, n_classes=3):
random_state = check_random_state(0)
y_true = random_state.randint(0, n_classes, size=(n_samples, ))
y_pred = random_state.randint(0, n_classes, size=(n_samples, ))
y_score = random_state.uniform(size=(n_samples, n_classes))
lb = LabelBinarizer().fit(y_true)
y_true_binarize = lb.transform(y_true)
y_pred_binarize = lb.transform(y_pred)
for name in METRICS_WITH_AVERAGING:
yield (_named_check(check_averaging, name), name, y_true,
y_true_binarize, y_pred, y_pred_binarize, y_score)
def test_averaging_multilabel(n_classes=5, n_samples=40):
_, y = make_multilabel_classification(n_features=1, n_classes=n_classes,
random_state=5, n_samples=n_samples,
allow_unlabeled=False)
y_true = y[:20]
y_pred = y[20:]
y_score = check_random_state(0).normal(size=(20, n_classes))
y_true_binarize = y_true
y_pred_binarize = y_pred
for name in METRICS_WITH_AVERAGING + THRESHOLDED_METRICS_WITH_AVERAGING:
yield (_named_check(check_averaging, name), name, y_true,
y_true_binarize, y_pred, y_pred_binarize, y_score)
def test_averaging_multilabel_all_zeroes():
y_true = np.zeros((20, 3))
y_pred = np.zeros((20, 3))
y_score = np.zeros((20, 3))
y_true_binarize = y_true
y_pred_binarize = y_pred
for name in METRICS_WITH_AVERAGING:
yield (_named_check(check_averaging, name), name, y_true,
y_true_binarize, y_pred, y_pred_binarize, y_score)
# Test _average_binary_score for weight.sum() == 0
binary_metric = (lambda y_true, y_score, average="macro":
_average_binary_score(
precision_score, y_true, y_score, average))
_check_averaging(binary_metric, y_true, y_pred, y_true_binarize,
y_pred_binarize, is_multilabel=True)
def test_averaging_multilabel_all_ones():
y_true = np.ones((20, 3))
y_pred = np.ones((20, 3))
y_score = np.ones((20, 3))
y_true_binarize = y_true
y_pred_binarize = y_pred
for name in METRICS_WITH_AVERAGING:
yield (_named_check(check_averaging, name), name, y_true,
y_true_binarize, y_pred, y_pred_binarize, y_score)
@ignore_warnings
def check_sample_weight_invariance(name, metric, y1, y2):
rng = np.random.RandomState(0)
sample_weight = rng.randint(1, 10, size=len(y1))
    # check that unit weights give the same score as no weights
unweighted_score = metric(y1, y2, sample_weight=None)
assert_almost_equal(
unweighted_score,
metric(y1, y2, sample_weight=np.ones(shape=len(y1))),
err_msg="For %s sample_weight=None is not equivalent to "
"sample_weight=ones" % name)
# check that the weighted and unweighted scores are unequal
weighted_score = metric(y1, y2, sample_weight=sample_weight)
assert_not_equal(
unweighted_score, weighted_score,
msg="Unweighted and weighted scores are unexpectedly "
"equal (%f) for %s" % (weighted_score, name))
# check that sample_weight can be a list
weighted_score_list = metric(y1, y2,
sample_weight=sample_weight.tolist())
assert_almost_equal(
weighted_score, weighted_score_list,
err_msg=("Weighted scores for array and list "
"sample_weight input are not equal (%f != %f) for %s") % (
weighted_score, weighted_score_list, name))
    # check that integer weights give the same result as repeated samples
repeat_weighted_score = metric(
np.repeat(y1, sample_weight, axis=0),
np.repeat(y2, sample_weight, axis=0), sample_weight=None)
assert_almost_equal(
weighted_score, repeat_weighted_score,
err_msg="Weighting %s is not equal to repeating samples" % name)
# check that ignoring a fraction of the samples is equivalent to setting
# the corresponding weights to zero
sample_weight_subset = sample_weight[1::2]
sample_weight_zeroed = np.copy(sample_weight)
sample_weight_zeroed[::2] = 0
y1_subset = y1[1::2]
y2_subset = y2[1::2]
weighted_score_subset = metric(y1_subset, y2_subset,
sample_weight=sample_weight_subset)
weighted_score_zeroed = metric(y1, y2,
sample_weight=sample_weight_zeroed)
assert_almost_equal(
weighted_score_subset, weighted_score_zeroed,
err_msg=("Zeroing weights does not give the same result as "
"removing the corresponding samples (%f != %f) for %s" %
(weighted_score_zeroed, weighted_score_subset, name)))
if not name.startswith('unnormalized'):
# check that the score is invariant under scaling of the weights by a
# common factor
for scaling in [2, 0.3]:
assert_almost_equal(
weighted_score,
metric(y1, y2, sample_weight=sample_weight * scaling),
err_msg="%s sample_weight is not invariant "
"under scaling" % name)
    # Check that an error is raised if
    # sample_weight.shape[0] != y_true.shape[0]
assert_raises(Exception, metric, y1, y2,
sample_weight=np.hstack([sample_weight, sample_weight]))
def test_sample_weight_invariance(n_samples=50):
random_state = check_random_state(0)
# regression
y_true = random_state.random_sample(size=(n_samples,))
y_pred = random_state.random_sample(size=(n_samples,))
for name in ALL_METRICS:
if name not in REGRESSION_METRICS:
continue
if name in METRICS_WITHOUT_SAMPLE_WEIGHT:
continue
metric = ALL_METRICS[name]
yield _named_check(check_sample_weight_invariance, name), name,\
metric, y_true, y_pred
# binary
random_state = check_random_state(0)
y_true = random_state.randint(0, 2, size=(n_samples, ))
y_pred = random_state.randint(0, 2, size=(n_samples, ))
y_score = random_state.random_sample(size=(n_samples,))
for name in ALL_METRICS:
if name in REGRESSION_METRICS:
continue
if (name in METRICS_WITHOUT_SAMPLE_WEIGHT or
name in METRIC_UNDEFINED_BINARY):
continue
metric = ALL_METRICS[name]
if name in THRESHOLDED_METRICS:
yield _named_check(check_sample_weight_invariance, name), name,\
metric, y_true, y_score
else:
yield _named_check(check_sample_weight_invariance, name), name,\
metric, y_true, y_pred
# multiclass
random_state = check_random_state(0)
y_true = random_state.randint(0, 5, size=(n_samples, ))
y_pred = random_state.randint(0, 5, size=(n_samples, ))
y_score = random_state.random_sample(size=(n_samples, 5))
for name in ALL_METRICS:
if name in REGRESSION_METRICS:
continue
if (name in METRICS_WITHOUT_SAMPLE_WEIGHT or
name in METRIC_UNDEFINED_BINARY_MULTICLASS):
continue
metric = ALL_METRICS[name]
if name in THRESHOLDED_METRICS:
yield _named_check(check_sample_weight_invariance, name), name,\
metric, y_true, y_score
else:
yield _named_check(check_sample_weight_invariance, name), name,\
metric, y_true, y_pred
# multilabel indicator
_, ya = make_multilabel_classification(n_features=1, n_classes=20,
random_state=0, n_samples=100,
allow_unlabeled=False)
_, yb = make_multilabel_classification(n_features=1, n_classes=20,
random_state=1, n_samples=100,
allow_unlabeled=False)
y_true = np.vstack([ya, yb])
y_pred = np.vstack([ya, ya])
y_score = random_state.randint(1, 4, size=y_true.shape)
for name in (MULTILABELS_METRICS + THRESHOLDED_MULTILABEL_METRICS +
MULTIOUTPUT_METRICS):
if name in METRICS_WITHOUT_SAMPLE_WEIGHT:
continue
metric = ALL_METRICS[name]
if name in THRESHOLDED_METRICS:
yield (_named_check(check_sample_weight_invariance, name), name,
metric, y_true, y_score)
else:
yield (_named_check(check_sample_weight_invariance, name), name,
metric, y_true, y_pred)
@ignore_warnings
def test_no_averaging_labels():
# test labels argument when not using averaging
# in multi-class and multi-label cases
y_true_multilabel = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
y_pred_multilabel = np.array([[0, 0, 1, 1], [0, 1, 1, 0]])
y_true_multiclass = np.array([0, 1, 2])
y_pred_multiclass = np.array([0, 2, 3])
labels = np.array([3, 0, 1, 2])
_, inverse_labels = np.unique(labels, return_inverse=True)
for name in METRICS_WITH_AVERAGING:
for y_true, y_pred in [[y_true_multiclass, y_pred_multiclass],
[y_true_multilabel, y_pred_multilabel]]:
if name not in MULTILABELS_METRICS and y_pred.ndim > 1:
continue
metric = ALL_METRICS[name]
score_labels = metric(y_true, y_pred, labels=labels, average=None)
score = metric(y_true, y_pred, average=None)
assert_array_equal(score_labels, score[inverse_labels])
| bsd-3-clause |
WangWenjun559/Weiss | summary/sumy/sklearn/preprocessing/tests/test_imputation.py | 213 | 11911 | import numpy as np
from scipy import sparse
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import assert_true
from sklearn.preprocessing.imputation import Imputer
from sklearn.pipeline import Pipeline
from sklearn import grid_search
from sklearn import tree
from sklearn.random_projection import sparse_random_matrix
def _check_statistics(X, X_true,
strategy, statistics, missing_values):
"""Utility function for testing imputation for a given strategy.
Test:
- along the two axes
- with dense and sparse arrays
Check that:
- the statistics (mean, median, mode) are correct
- the missing values are imputed correctly"""
err_msg = "Parameters: strategy = %s, missing_values = %s, " \
"axis = {0}, sparse = {1}" % (strategy, missing_values)
# Normal matrix, axis = 0
imputer = Imputer(missing_values, strategy=strategy, axis=0)
X_trans = imputer.fit(X).transform(X.copy())
assert_array_equal(imputer.statistics_, statistics,
err_msg.format(0, False))
assert_array_equal(X_trans, X_true, err_msg.format(0, False))
# Normal matrix, axis = 1
imputer = Imputer(missing_values, strategy=strategy, axis=1)
imputer.fit(X.transpose())
if np.isnan(statistics).any():
assert_raises(ValueError, imputer.transform, X.copy().transpose())
else:
X_trans = imputer.transform(X.copy().transpose())
assert_array_equal(X_trans, X_true.transpose(),
err_msg.format(1, False))
# Sparse matrix, axis = 0
imputer = Imputer(missing_values, strategy=strategy, axis=0)
imputer.fit(sparse.csc_matrix(X))
X_trans = imputer.transform(sparse.csc_matrix(X.copy()))
if sparse.issparse(X_trans):
X_trans = X_trans.toarray()
assert_array_equal(imputer.statistics_, statistics,
err_msg.format(0, True))
assert_array_equal(X_trans, X_true, err_msg.format(0, True))
# Sparse matrix, axis = 1
imputer = Imputer(missing_values, strategy=strategy, axis=1)
imputer.fit(sparse.csc_matrix(X.transpose()))
if np.isnan(statistics).any():
assert_raises(ValueError, imputer.transform,
sparse.csc_matrix(X.copy().transpose()))
else:
X_trans = imputer.transform(sparse.csc_matrix(X.copy().transpose()))
if sparse.issparse(X_trans):
X_trans = X_trans.toarray()
assert_array_equal(X_trans, X_true.transpose(),
err_msg.format(1, True))
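# Illustrative usage of the helper above (not part of the original test
# suite; the values are chosen so the column statistics are easy to verify
# by hand):
#     X = np.array([[0., 2.], [4., 0.], [6., 8.]])
#     X_true = np.array([[5., 2.], [4., 5.], [6., 8.]])
#     _check_statistics(X, X_true, "mean", [5., 5.], 0)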
def test_imputation_shape():
# Verify the shapes of the imputed matrix for different strategies.
X = np.random.randn(10, 2)
X[::2] = np.nan
for strategy in ['mean', 'median', 'most_frequent']:
imputer = Imputer(strategy=strategy)
X_imputed = imputer.fit_transform(X)
assert_equal(X_imputed.shape, (10, 2))
X_imputed = imputer.fit_transform(sparse.csr_matrix(X))
assert_equal(X_imputed.shape, (10, 2))
def test_imputation_mean_median_only_zero():
# Test imputation using the mean and median strategies, when
# missing_values == 0.
X = np.array([
[np.nan, 0, 0, 0, 5],
[np.nan, 1, 0, np.nan, 3],
[np.nan, 2, 0, 0, 0],
[np.nan, 6, 0, 5, 13],
])
X_imputed_mean = np.array([
[3, 5],
[1, 3],
[2, 7],
[6, 13],
])
statistics_mean = [np.nan, 3, np.nan, np.nan, 7]
# Behaviour of median with NaN is undefined, e.g. different results in
# np.median and np.ma.median
X_for_median = X[:, [0, 1, 2, 4]]
X_imputed_median = np.array([
[2, 5],
[1, 3],
[2, 5],
[6, 13],
])
statistics_median = [np.nan, 2, np.nan, 5]
_check_statistics(X, X_imputed_mean, "mean", statistics_mean, 0)
_check_statistics(X_for_median, X_imputed_median, "median",
statistics_median, 0)
def test_imputation_mean_median():
# Test imputation using the mean and median strategies, when
# missing_values != 0.
rng = np.random.RandomState(0)
dim = 10
dec = 10
shape = (dim * dim, dim + dec)
zeros = np.zeros(shape[0])
values = np.arange(1, shape[0]+1)
values[4::2] = - values[4::2]
tests = [("mean", "NaN", lambda z, v, p: np.mean(np.hstack((z, v)))),
("mean", 0, lambda z, v, p: np.mean(v)),
("median", "NaN", lambda z, v, p: np.median(np.hstack((z, v)))),
("median", 0, lambda z, v, p: np.median(v))]
for strategy, test_missing_values, true_value_fun in tests:
X = np.empty(shape)
X_true = np.empty(shape)
true_statistics = np.empty(shape[1])
# Create a matrix X with columns
# - with only zeros,
# - with only missing values
# - with zeros, missing values and values
# And a matrix X_true containing all true values
for j in range(shape[1]):
nb_zeros = (j - dec + 1 > 0) * (j - dec + 1) * (j - dec + 1)
nb_missing_values = max(shape[0] + dec * dec
- (j + dec) * (j + dec), 0)
nb_values = shape[0] - nb_zeros - nb_missing_values
z = zeros[:nb_zeros]
p = np.repeat(test_missing_values, nb_missing_values)
v = values[rng.permutation(len(values))[:nb_values]]
true_statistics[j] = true_value_fun(z, v, p)
# Create the columns
X[:, j] = np.hstack((v, z, p))
if 0 == test_missing_values:
X_true[:, j] = np.hstack((v,
np.repeat(
true_statistics[j],
nb_missing_values + nb_zeros)))
else:
X_true[:, j] = np.hstack((v,
z,
np.repeat(true_statistics[j],
nb_missing_values)))
# Shuffle them the same way
np.random.RandomState(j).shuffle(X[:, j])
np.random.RandomState(j).shuffle(X_true[:, j])
# Mean doesn't support columns containing NaNs, median does
if strategy == "median":
cols_to_keep = ~np.isnan(X_true).any(axis=0)
else:
cols_to_keep = ~np.isnan(X_true).all(axis=0)
X_true = X_true[:, cols_to_keep]
_check_statistics(X, X_true, strategy,
true_statistics, test_missing_values)
def test_imputation_median_special_cases():
# Test median imputation with sparse boundary cases
X = np.array([
[0, np.nan, np.nan], # odd: implicit zero
[5, np.nan, np.nan], # odd: explicit nonzero
[0, 0, np.nan], # even: average two zeros
[-5, 0, np.nan], # even: avg zero and neg
[0, 5, np.nan], # even: avg zero and pos
[4, 5, np.nan], # even: avg nonzeros
[-4, -5, np.nan], # even: avg negatives
[-1, 2, np.nan], # even: crossing neg and pos
]).transpose()
X_imputed_median = np.array([
[0, 0, 0],
[5, 5, 5],
[0, 0, 0],
[-5, 0, -2.5],
[0, 5, 2.5],
[4, 5, 4.5],
[-4, -5, -4.5],
[-1, 2, .5],
]).transpose()
statistics_median = [0, 5, 0, -2.5, 2.5, 4.5, -4.5, .5]
_check_statistics(X, X_imputed_median, "median",
statistics_median, 'NaN')
def test_imputation_most_frequent():
# Test imputation using the most-frequent strategy.
X = np.array([
[-1, -1, 0, 5],
[-1, 2, -1, 3],
[-1, 1, 3, -1],
[-1, 2, 3, 7],
])
X_true = np.array([
[2, 0, 5],
[2, 3, 3],
[1, 3, 3],
[2, 3, 7],
])
    # scipy.stats.mode, used in Imputer, doesn't return the first most
    # frequent value as promised in the doc, but the lowest most frequent one.
    # If this test starts failing after an update of scipy, Imputer will need
    # to be updated to be consistent with the new (correct) behaviour.
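    # Illustrative example (not part of the original test): ties are broken
    # toward the lowest value, e.g.
    #     >>> from scipy import stats
    #     >>> stats.mode([1, 1, 2, 2])[0]
    #     array([ 1.])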
_check_statistics(X, X_true, "most_frequent", [np.nan, 2, 3, 3], -1)
def test_imputation_pipeline_grid_search():
# Test imputation within a pipeline + gridsearch.
pipeline = Pipeline([('imputer', Imputer(missing_values=0)),
('tree', tree.DecisionTreeRegressor(random_state=0))])
parameters = {
'imputer__strategy': ["mean", "median", "most_frequent"],
'imputer__axis': [0, 1]
}
l = 100
X = sparse_random_matrix(l, l, density=0.10)
Y = sparse_random_matrix(l, 1, density=0.10).toarray()
gs = grid_search.GridSearchCV(pipeline, parameters)
gs.fit(X, Y)
def test_imputation_pickle():
# Test for pickling imputers.
import pickle
l = 100
X = sparse_random_matrix(l, l, density=0.10)
for strategy in ["mean", "median", "most_frequent"]:
imputer = Imputer(missing_values=0, strategy=strategy)
imputer.fit(X)
imputer_pickled = pickle.loads(pickle.dumps(imputer))
assert_array_equal(imputer.transform(X.copy()),
imputer_pickled.transform(X.copy()),
"Fail to transform the data after pickling "
"(strategy = %s)" % (strategy))
def test_imputation_copy():
# Test imputation with copy
X_orig = sparse_random_matrix(5, 5, density=0.75, random_state=0)
# copy=True, dense => copy
X = X_orig.copy().toarray()
imputer = Imputer(missing_values=0, strategy="mean", copy=True)
Xt = imputer.fit(X).transform(X)
Xt[0, 0] = -1
assert_false(np.all(X == Xt))
# copy=True, sparse csr => copy
X = X_orig.copy()
imputer = Imputer(missing_values=X.data[0], strategy="mean", copy=True)
Xt = imputer.fit(X).transform(X)
Xt.data[0] = -1
assert_false(np.all(X.data == Xt.data))
# copy=False, dense => no copy
X = X_orig.copy().toarray()
imputer = Imputer(missing_values=0, strategy="mean", copy=False)
Xt = imputer.fit(X).transform(X)
Xt[0, 0] = -1
assert_true(np.all(X == Xt))
# copy=False, sparse csr, axis=1 => no copy
X = X_orig.copy()
imputer = Imputer(missing_values=X.data[0], strategy="mean",
copy=False, axis=1)
Xt = imputer.fit(X).transform(X)
Xt.data[0] = -1
assert_true(np.all(X.data == Xt.data))
# copy=False, sparse csc, axis=0 => no copy
X = X_orig.copy().tocsc()
imputer = Imputer(missing_values=X.data[0], strategy="mean",
copy=False, axis=0)
Xt = imputer.fit(X).transform(X)
Xt.data[0] = -1
assert_true(np.all(X.data == Xt.data))
# copy=False, sparse csr, axis=0 => copy
X = X_orig.copy()
imputer = Imputer(missing_values=X.data[0], strategy="mean",
copy=False, axis=0)
Xt = imputer.fit(X).transform(X)
Xt.data[0] = -1
assert_false(np.all(X.data == Xt.data))
# copy=False, sparse csc, axis=1 => copy
X = X_orig.copy().tocsc()
imputer = Imputer(missing_values=X.data[0], strategy="mean",
copy=False, axis=1)
Xt = imputer.fit(X).transform(X)
Xt.data[0] = -1
assert_false(np.all(X.data == Xt.data))
# copy=False, sparse csr, axis=1, missing_values=0 => copy
X = X_orig.copy()
imputer = Imputer(missing_values=0, strategy="mean",
copy=False, axis=1)
Xt = imputer.fit(X).transform(X)
assert_false(sparse.issparse(Xt))
# Note: If X is sparse and if missing_values=0, then a (dense) copy of X is
# made, even if copy=False.
| apache-2.0 |
aestrivex/mne-python | examples/time_frequency/plot_compute_source_psd_epochs.py | 19 | 2833 | """
=====================================================================
Compute Power Spectral Density of inverse solution from single epochs
=====================================================================
Compute PSD of dSPM inverse solution on single trial epochs restricted
to a brain label. The PSD is computed using a multi-taper method with
Discrete Prolate Spheroidal Sequence (DPSS) windows.
"""
# Author: Martin Luessi <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.io import Raw
from mne.minimum_norm import read_inverse_operator, compute_source_psd_epochs
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
event_id, tmin, tmax = 1, -0.2, 0.5
snr = 1.0 # use smaller SNR for raw data
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = Raw(fname_raw)
events = mne.read_events(fname_event)
# Set up pick list
include = []
raw.info['bads'] += ['EEG 053'] # bads + 1 more
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# define frequencies of interest
fmin, fmax = 0., 70.
bandwidth = 4. # bandwidth of the windows in Hz
# compute source space psd in label
# Note: By using "return_generator=True" stcs will be a generator object
# instead of a list. This allows us to iterate without having to
# keep everything in memory.
stcs = compute_source_psd_epochs(epochs, inverse_operator, lambda2=lambda2,
method=method, fmin=fmin, fmax=fmax,
bandwidth=bandwidth, label=label,
return_generator=True)
# compute average PSD over the first 10 epochs
n_epochs = 10
for i, stc in enumerate(stcs):
if i >= n_epochs:
break
if i == 0:
psd_avg = np.mean(stc.data, axis=0)
else:
psd_avg += np.mean(stc.data, axis=0)
psd_avg /= n_epochs
freqs = stc.times # the frequencies are stored here
plt.figure()
plt.plot(freqs, psd_avg)
plt.xlabel('Freq (Hz)')
plt.ylabel('Power Spectral Density')
plt.show()
| bsd-3-clause |
frank-tancf/scikit-learn | doc/tutorial/text_analytics/skeletons/exercise_02_sentiment.py | 157 | 2409 | """Build a sentiment analysis / polarity model
Sentiment analysis can be cast as a binary text classification problem,
that is, fitting a linear classifier on features extracted from the text
of the user messages so as to guess whether the opinion of the author is
positive or negative.
In this example we will use a movie review dataset.
"""
# Author: Olivier Grisel <[email protected]>
# License: Simplified BSD
import sys
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_files
from sklearn.model_selection import train_test_split
from sklearn import metrics
if __name__ == "__main__":
# NOTE: we put the following in a 'if __name__ == "__main__"' protected
# block to be able to use a multi-core grid search that also works under
# Windows, see: http://docs.python.org/library/multiprocessing.html#windows
# The multiprocessing module is used as the backend of joblib.Parallel
# that is used when n_jobs != 1 in GridSearchCV
# the training data folder must be passed as first argument
movie_reviews_data_folder = sys.argv[1]
dataset = load_files(movie_reviews_data_folder, shuffle=False)
print("n_samples: %d" % len(dataset.data))
# split the dataset in training and test set:
docs_train, docs_test, y_train, y_test = train_test_split(
dataset.data, dataset.target, test_size=0.25, random_state=None)
# TASK: Build a vectorizer / classifier pipeline that filters out tokens
# that are too rare or too frequent
# TASK: Build a grid search to find out whether unigrams or bigrams are
# more useful.
# Fit the pipeline on the training set using grid search for the parameters
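    # One possible sketch for the tasks above (kept as comments so the
    # exercise stays open; the parameter values below are illustrative only):
    #     pipeline = Pipeline([
    #         ('vect', TfidfVectorizer(min_df=3, max_df=0.95)),
    #         ('clf', LinearSVC(C=1000)),
    #     ])
    #     grid_search = GridSearchCV(pipeline,
    #                                {'vect__ngram_range': [(1, 1), (1, 2)]},
    #                                n_jobs=-1)
    #     grid_search.fit(docs_train, y_train)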
# TASK: print the cross-validated scores for the each parameters set
# explored by the grid search
# TASK: Predict the outcome on the testing set and store it in a variable
# named y_predicted
# Print the classification report
print(metrics.classification_report(y_test, y_predicted,
target_names=dataset.target_names))
# Print and plot the confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted)
print(cm)
# import matplotlib.pyplot as plt
# plt.matshow(cm)
# plt.show()
| bsd-3-clause |
jseabold/statsmodels | statsmodels/genmod/generalized_estimating_equations.py | 4 | 115703 | """
Procedures for fitting marginal regression models to dependent data
using Generalized Estimating Equations.
References
----------
KY Liang and S Zeger. "Longitudinal data analysis using
generalized linear models". Biometrika (1986) 73 (1): 13-22.
S Zeger and KY Liang. "Longitudinal Data Analysis for Discrete and
Continuous Outcomes". Biometrics Vol. 42, No. 1 (Mar., 1986),
pp. 121-130
A Rotnitzky and NP Jewell (1990). "Hypothesis testing of regression
parameters in semiparametric generalized linear models for cluster
correlated data", Biometrika, 77, 485-497.
Xu Guo and Wei Pan (2002). "Small sample performance of the score
test in GEE".
http://www.sph.umn.edu/faculty1/wp-content/uploads/2012/11/rr2002-013.pdf
LA Mancl LA, TA DeRouen (2001). A covariance estimator for GEE with
improved small-sample properties. Biometrics. 2001 Mar;57(1):126-34.
"""
from statsmodels.compat.python import lzip
from statsmodels.compat.pandas import Appender
import numpy as np
from scipy import stats
import pandas as pd
import patsy
from collections import defaultdict
from statsmodels.tools.decorators import cache_readonly
import statsmodels.base.model as base
# used for wrapper:
import statsmodels.regression.linear_model as lm
import statsmodels.base.wrapper as wrap
from statsmodels.genmod import families
from statsmodels.genmod.generalized_linear_model import GLM, GLMResults
from statsmodels.genmod import cov_struct as cov_structs
import statsmodels.genmod.families.varfuncs as varfuncs
from statsmodels.genmod.families.links import Link
from statsmodels.tools.sm_exceptions import (ConvergenceWarning,
DomainWarning,
IterationLimitWarning,
ValueWarning)
import warnings
from statsmodels.graphics._regressionplots_doc import (
_plot_added_variable_doc,
_plot_partial_residuals_doc,
_plot_ceres_residuals_doc)
from statsmodels.discrete.discrete_margins import (
_get_margeff_exog, _check_margeff_args, _effects_at, margeff_cov_with_se,
_check_at_is_all, _transform_names, _check_discrete_args,
_get_dummy_index, _get_count_index)
class ParameterConstraint(object):
"""
A class for managing linear equality constraints for a parameter
vector.
"""
def __init__(self, lhs, rhs, exog):
"""
Parameters
----------
lhs : ndarray
A q x p matrix which is the left hand side of the
constraint lhs * param = rhs. The number of constraints is
q >= 1 and p is the dimension of the parameter vector.
rhs : ndarray
A 1-dimensional vector of length q which is the right hand
side of the constraint equation.
exog : ndarray
            The n x p exogenous data for the full model.
"""
# In case a row or column vector is passed (patsy linear
# constraints passes a column vector).
rhs = np.atleast_1d(rhs.squeeze())
if rhs.ndim > 1:
raise ValueError("The right hand side of the constraint "
"must be a vector.")
if len(rhs) != lhs.shape[0]:
raise ValueError("The number of rows of the left hand "
"side constraint matrix L must equal "
"the length of the right hand side "
"constraint vector R.")
self.lhs = lhs
self.rhs = rhs
# The columns of lhs0 are an orthogonal basis for the
# orthogonal complement to row(lhs), the columns of lhs1 are
# an orthogonal basis for row(lhs). The columns of lhsf =
# [lhs0, lhs1] are mutually orthogonal.
lhs_u, lhs_s, lhs_vt = np.linalg.svd(lhs.T, full_matrices=1)
self.lhs0 = lhs_u[:, len(lhs_s):]
self.lhs1 = lhs_u[:, 0:len(lhs_s)]
self.lhsf = np.hstack((self.lhs0, self.lhs1))
# param0 is one solution to the underdetermined system
# L * param = R.
self.param0 = np.dot(self.lhs1, np.dot(lhs_vt, self.rhs) /
lhs_s)
self._offset_increment = np.dot(exog, self.param0)
self.orig_exog = exog
self.exog_fulltrans = np.dot(exog, self.lhsf)
def offset_increment(self):
"""
Returns a vector that should be added to the offset vector to
accommodate the constraint.
Parameters
----------
exog : array_like
            The exogenous data for the model.
"""
return self._offset_increment
def reduced_exog(self):
"""
Returns a linearly transformed exog matrix whose columns span
the constrained model space.
Parameters
----------
exog : array_like
            The exogenous data for the model.
"""
return self.exog_fulltrans[:, 0:self.lhs0.shape[1]]
def restore_exog(self):
"""
Returns the full exog matrix before it was reduced to
satisfy the constraint.
"""
return self.orig_exog
def unpack_param(self, params):
"""
Converts the parameter vector `params` from reduced to full
coordinates.
"""
return self.param0 + np.dot(self.lhs0, params)
def unpack_cov(self, bcov):
"""
Converts the covariance matrix `bcov` from reduced to full
coordinates.
"""
return np.dot(self.lhs0, np.dot(bcov, self.lhs0.T))
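# Illustrative sketch (not part of the original module): to constrain the
# first two regression coefficients of a p-dimensional model to be equal,
# one could pass ``constraint=(lhs, rhs)`` to ``GEE`` with
#     lhs = np.array([[1., -1.] + [0.] * (p - 2)])
#     rhs = np.array([0.])
# so that the model is fit subject to lhs * params = rhs.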
_gee_init_doc = """
Marginal regression model fit using Generalized Estimating Equations.
GEE can be used to fit Generalized Linear Models (GLMs) when the
data have a grouped structure, and the observations are possibly
correlated within groups but not between groups.
Parameters
----------
endog : array_like
1d array of endogenous values (i.e. responses, outcomes,
dependent variables, or 'Y' values).
exog : array_like
2d array of exogeneous values (i.e. covariates, predictors,
independent variables, regressors, or 'X' values). A `nobs x
k` array where `nobs` is the number of observations and `k` is
the number of regressors. An intercept is not included by
default and should be added by the user. See
`statsmodels.tools.add_constant`.
groups : array_like
A 1d array of length `nobs` containing the group labels.
time : array_like
A 2d array of time (or other index) values, used by some
dependence structures to define similarity relationships among
observations within a cluster.
family : family class instance
%(family_doc)s
cov_struct : CovStruct class instance
The default is Independence. To specify an exchangeable
structure use cov_struct = Exchangeable(). See
statsmodels.genmod.cov_struct.CovStruct for more
information.
offset : array_like
An offset to be included in the fit. If provided, must be
an array whose length is the number of rows in exog.
dep_data : array_like
Additional data passed to the dependence structure.
constraint : (ndarray, ndarray)
If provided, the constraint is a tuple (L, R) such that the
model parameters are estimated under the constraint L *
param = R, where L is a q x p matrix and R is a
q-dimensional vector. If constraint is provided, a score
test is performed to compare the constrained model to the
unconstrained model.
update_dep : bool
If true, the dependence parameters are optimized, otherwise
they are held fixed at their starting values.
weights : array_like
An array of case weights to use in the analysis.
%(extra_params)s
See Also
--------
statsmodels.genmod.families.family
:ref:`families`
:ref:`links`
Notes
-----
Only the following combinations make sense for family and link ::
                 + ident log logit probit cloglog pow opow nbinom loglog logc
    Gaussian     |   x    x                        x
    inv Gaussian |   x    x                        x
    binomial     |   x    x    x     x       x     x   x            x     x
    Poisson      |   x    x                        x
    neg binomial |   x    x                        x         x
    gamma        |   x    x                        x
Not all of these link functions are currently available.
    Endog and exog are references, so if the data they refer to are
    already arrays and those arrays are changed, endog and exog will
    change as well.
The "robust" covariance type is the standard "sandwich estimator"
(e.g. Liang and Zeger (1986)). It is the default here and in most
other packages. The "naive" estimator gives smaller standard
errors, but is only correct if the working correlation structure
is correctly specified. The "bias reduced" estimator of Mancl and
DeRouen (Biometrics, 2001) reduces the downward bias of the robust
estimator.
The robust covariance provided here follows Liang and Zeger (1986)
and agrees with R's gee implementation. To obtain the robust
standard errors reported in Stata, multiply by sqrt(N / (N - g)),
where N is the total sample size, and g is the average group size.
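    For example (illustrative only; ``nobs`` and ``avg_group_size`` are
    assumed to have been computed by the user from the data):
    >>> result = model.fit(scaling_factor=nobs / (nobs - avg_group_size))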
%(notes)s
Examples
--------
%(example)s
"""
_gee_nointercept = """
The nominal and ordinal GEE models should not have an intercept
(either implicit or explicit). Use "0 + " in a formula to
suppress the intercept.
"""
_gee_family_doc = """\
The default is Gaussian. To specify the binomial
distribution use `family=sm.families.Binomial()`. Each family
can take a link instance as an argument. See
statsmodels.genmod.families.family for more information."""
_gee_ordinal_family_doc = """\
The only family supported is `Binomial`. The default `Logit`
link may be replaced with `probit` if desired."""
_gee_nominal_family_doc = """\
The default value `None` uses a multinomial logit family
specifically designed for use with GEE. Setting this
argument to a non-default value is not currently supported."""
_gee_fit_doc = """
Fits a marginal regression model using generalized estimating
equations (GEE).
Parameters
----------
maxiter : int
The maximum number of iterations
ctol : float
The convergence criterion for stopping the Gauss-Seidel
iterations
start_params : array_like
A vector of starting values for the regression
coefficients. If None, a default is chosen.
params_niter : int
The number of Gauss-Seidel updates of the mean structure
parameters that take place prior to each update of the
dependence structure.
first_dep_update : int
No dependence structure updates occur before this
iteration number.
cov_type : str
One of "robust", "naive", or "bias_reduced".
ddof_scale : scalar or None
The scale parameter is estimated as the sum of squared
Pearson residuals divided by `N - ddof_scale`, where N
is the total sample size. If `ddof_scale` is None, the
number of covariates (including an intercept if present)
is used.
scaling_factor : scalar
The estimated covariance of the parameter estimates is
scaled by this value. Default is 1, Stata uses N / (N - g),
where N is the total sample size and g is the average group
size.
scale : str or float, optional
`scale` can be None, 'X2', or a float
If a float, its value is used as the scale parameter.
The default value is None, which uses `X2` (Pearson's
chi-square) for Gamma, Gaussian, and Inverse Gaussian.
The default is 1 for the Binomial and Poisson families.
Returns
-------
An instance of the GEEResults class or subclass
Notes
-----
If convergence difficulties occur, increase the values of
`first_dep_update` and/or `params_niter`. Setting
`first_dep_update` to a greater value (e.g. ~10-20) causes the
algorithm to move close to the GLM solution before attempting
to identify the dependence structure.
For the Gaussian family, there is no benefit to setting
`params_niter` to a value greater than 1, since the mean
structure parameters converge in one step.
"""
_gee_results_doc = """
Attributes
----------
cov_params_default : ndarray
default covariance of the parameter estimates. Is chosen among one
of the following three based on `cov_type`
cov_robust : ndarray
covariance of the parameter estimates that is robust
cov_naive : ndarray
covariance of the parameter estimates that is not robust to
correlation or variance misspecification
cov_robust_bc : ndarray
covariance of the parameter estimates that is robust and bias
reduced
converged : bool
indicator for convergence of the optimization.
True if the norm of the score is smaller than a threshold
cov_type : str
string indicating whether a "robust", "naive" or "bias_reduced"
covariance is used as default
fit_history : dict
Contains information about the iterations.
fittedvalues : ndarray
Linear predicted values for the fitted model.
dot(exog, params)
model : class instance
Pointer to GEE model instance that called `fit`.
normalized_cov_params : ndarray
See GEE docstring
params : ndarray
The coefficients of the fitted model. Note that
interpretation of the coefficients often depends on the
distribution family and the data.
scale : float
The estimate of the scale / dispersion for the model fit.
See GEE.fit for more information.
score_norm : float
norm of the score at the end of the iterative estimation.
bse : ndarray
The standard errors of the fitted GEE parameters.
"""
_gee_example = """
Logistic regression with autoregressive working dependence:
>>> import statsmodels.api as sm
>>> family = sm.families.Binomial()
>>> va = sm.cov_struct.Autoregressive()
>>> model = sm.GEE(endog, exog, group, family=family, cov_struct=va)
>>> result = model.fit()
>>> print(result.summary())
Use formulas to fit a Poisson GLM with independent working
dependence:
>>> import statsmodels.api as sm
>>> fam = sm.families.Poisson()
>>> ind = sm.cov_struct.Independence()
>>> model = sm.GEE.from_formula("y ~ age + trt + base", "subject",
data, cov_struct=ind, family=fam)
>>> result = model.fit()
>>> print(result.summary())
Equivalent, using the formula API:
>>> import statsmodels.api as sm
>>> import statsmodels.formula.api as smf
>>> fam = sm.families.Poisson()
>>> ind = sm.cov_struct.Independence()
>>> model = smf.gee("y ~ age + trt + base", "subject",
data, cov_struct=ind, family=fam)
>>> result = model.fit()
>>> print(result.summary())
"""
_gee_ordinal_example = """
Fit an ordinal regression model using GEE, with "global
odds ratio" dependence:
>>> import statsmodels.api as sm
>>> gor = sm.cov_struct.GlobalOddsRatio("ordinal")
>>> model = sm.OrdinalGEE(endog, exog, groups, cov_struct=gor)
>>> result = model.fit()
>>> print(result.summary())
Using formulas:
>>> import statsmodels.formula.api as smf
>>> model = smf.ordinal_gee("y ~ 0 + x1 + x2", groups, data,
cov_struct=gor)
>>> result = model.fit()
>>> print(result.summary())
"""
_gee_nominal_example = """
Fit a nominal regression model using GEE:
>>> import statsmodels.api as sm
>>> import statsmodels.formula.api as smf
>>> gor = sm.cov_struct.GlobalOddsRatio("nominal")
>>> model = sm.NominalGEE(endog, exog, groups, cov_struct=gor)
>>> result = model.fit()
>>> print(result.summary())
Using formulas:
>>> import statsmodels.api as sm
>>> model = sm.NominalGEE.from_formula("y ~ 0 + x1 + x2", groups,
data, cov_struct=gor)
>>> result = model.fit()
>>> print(result.summary())
Using the formula API:
>>> import statsmodels.formula.api as smf
>>> model = smf.nominal_gee("y ~ 0 + x1 + x2", groups, data,
cov_struct=gor)
>>> result = model.fit()
>>> print(result.summary())
"""
def _check_args(endog, exog, groups, time, offset, exposure):
if endog.size != exog.shape[0]:
raise ValueError("Leading dimension of 'exog' should match "
"length of 'endog'")
if groups.size != endog.size:
raise ValueError("'groups' and 'endog' should have the same size")
if time is not None and (time.size != endog.size):
raise ValueError("'time' and 'endog' should have the same size")
if offset is not None and (offset.size != endog.size):
raise ValueError("'offset and 'endog' should have the same size")
if exposure is not None and (exposure.size != endog.size):
raise ValueError("'exposure' and 'endog' should have the same size")
class GEE(GLM):
__doc__ = (
" Marginal Regression Model using Generalized Estimating "
"Equations.\n" + _gee_init_doc %
{'extra_params': base._missing_param_doc,
'family_doc': _gee_family_doc,
'example': _gee_example,
'notes': ""})
cached_means = None
def __init__(self, endog, exog, groups, time=None, family=None,
cov_struct=None, missing='none', offset=None,
exposure=None, dep_data=None, constraint=None,
update_dep=True, weights=None, **kwargs):
if family is not None:
if not isinstance(family.link, tuple(family.safe_links)):
import warnings
msg = ("The {0} link function does not respect the "
"domain of the {1} family.")
warnings.warn(msg.format(family.link.__class__.__name__,
family.__class__.__name__),
DomainWarning)
groups = np.asarray(groups) # in case groups is pandas
if "missing_idx" in kwargs and kwargs["missing_idx"] is not None:
# If here, we are entering from super.from_formula; missing
# has already been dropped from endog and exog, but not from
# the other variables.
ii = ~kwargs["missing_idx"]
groups = groups[ii]
if time is not None:
time = time[ii]
if offset is not None:
offset = offset[ii]
if exposure is not None:
exposure = exposure[ii]
del kwargs["missing_idx"]
_check_args(endog, exog, groups, time, offset, exposure)
self.missing = missing
self.dep_data = dep_data
self.constraint = constraint
self.update_dep = update_dep
self._fit_history = defaultdict(list)
# Pass groups, time, offset, and dep_data so they are
# processed for missing data along with endog and exog.
# Calling super creates self.exog, self.endog, etc. as
# ndarrays and the original exog, endog, etc. are
# self.data.endog, etc.
super(GEE, self).__init__(endog, exog, groups=groups,
time=time, offset=offset,
exposure=exposure, weights=weights,
dep_data=dep_data, missing=missing,
family=family, **kwargs)
self._init_keys.extend(["update_dep", "constraint", "family",
"cov_struct"])
# Handle the family argument
if family is None:
family = families.Gaussian()
else:
if not issubclass(family.__class__, families.Family):
raise ValueError("GEE: `family` must be a genmod "
"family instance")
self.family = family
# Handle the cov_struct argument
if cov_struct is None:
cov_struct = cov_structs.Independence()
else:
if not issubclass(cov_struct.__class__, cov_structs.CovStruct):
raise ValueError("GEE: `cov_struct` must be a genmod "
"cov_struct instance")
self.cov_struct = cov_struct
# Handle the constraint
self.constraint = None
if constraint is not None:
if len(constraint) != 2:
raise ValueError("GEE: `constraint` must be a 2-tuple.")
if constraint[0].shape[1] != self.exog.shape[1]:
raise ValueError(
"GEE: the left hand side of the constraint must have "
"the same number of columns as the exog matrix.")
self.constraint = ParameterConstraint(constraint[0],
constraint[1],
self.exog)
if self._offset_exposure is not None:
self._offset_exposure += self.constraint.offset_increment()
else:
self._offset_exposure = (
self.constraint.offset_increment().copy())
self.exog = self.constraint.reduced_exog()
# Create list of row indices for each group
group_labels, ix = np.unique(self.groups, return_inverse=True)
se = pd.Series(index=np.arange(len(ix)), dtype="int")
gb = se.groupby(ix).groups
dk = [(lb, np.asarray(gb[k])) for k, lb in enumerate(group_labels)]
self.group_indices = dict(dk)
self.group_labels = group_labels
# Convert the data to the internal representation, which is a
# list of arrays, corresponding to the groups.
self.endog_li = self.cluster_list(self.endog)
self.exog_li = self.cluster_list(self.exog)
if self.weights is not None:
self.weights_li = self.cluster_list(self.weights)
self.num_group = len(self.endog_li)
# Time defaults to a 1d grid with equal spacing
if self.time is not None:
if self.time.ndim == 1:
self.time = self.time[:, None]
self.time_li = self.cluster_list(self.time)
else:
self.time_li = \
[np.arange(len(y), dtype=np.float64)[:, None]
for y in self.endog_li]
self.time = np.concatenate(self.time_li)
if (self._offset_exposure is None or
(np.isscalar(self._offset_exposure) and
self._offset_exposure == 0.)):
self.offset_li = None
else:
self.offset_li = self.cluster_list(self._offset_exposure)
if constraint is not None:
self.constraint.exog_fulltrans_li = \
self.cluster_list(self.constraint.exog_fulltrans)
self.family = family
self.cov_struct.initialize(self)
# Total sample size
group_ns = [len(y) for y in self.endog_li]
self.nobs = sum(group_ns)
# The following are column based, not rank based; see issue #1928
self.df_model = self.exog.shape[1] - 1 # assumes constant
self.df_resid = self.nobs - self.exog.shape[1]
# Skip the covariance updates if all groups have a single
# observation (reduces to fitting a GLM).
maxgroup = max([len(x) for x in self.endog_li])
if maxgroup == 1:
self.update_dep = False
# Override to allow groups and time to be passed as variable
# names.
@classmethod
def from_formula(cls, formula, groups, data, subset=None,
time=None, offset=None, exposure=None,
*args, **kwargs):
"""
Create a GEE model instance from a formula and dataframe.
Parameters
----------
formula : str or generic Formula object
The formula specifying the model
groups : array_like or string
Array of grouping labels. If a string, this is the name
of a variable in `data` that contains the grouping labels.
data : array_like
The data for the model.
subset : array_like
An array-like object of booleans, integers, or index
values that indicate the subset of the data to be used when
fitting the model.
time : array_like or string
The time values, used for dependence structures involving
distances between observations. If a string, this is the
name of a variable in `data` that contains the time
values.
offset : array_like or string
The offset values, added to the linear predictor. If a
string, this is the name of a variable in `data` that
contains the offset values.
exposure : array_like or string
The exposure values, only used if the link function is the
logarithm function, in which case the log of `exposure`
is added to the offset (if any). If a string, this is the
name of a variable in `data` that contains the exposure
values.
%(missing_param_doc)s
args : extra arguments
These are passed to the model
kwargs : extra keyword arguments
These are passed to the model with two exceptions. `dep_data`
is processed as described below. The ``eval_env`` keyword is
passed to patsy. It can be either a
:class:`patsy:patsy.EvalEnvironment` object or an integer
indicating the depth of the namespace to use. For example, the
default ``eval_env=0`` uses the calling namespace.
If you wish to use a "clean" environment set ``eval_env=-1``.
Optional arguments
------------------
dep_data : str or array_like
Data used for estimating the dependence structure. See
specific dependence structure classes (e.g. Nested) for
details. If `dep_data` is a string, it is interpreted as
a formula that is applied to `data`. If it is an array, it
must be an array of strings corresponding to column names in
`data`. Otherwise it must be an array-like with the same
number of rows as data.
Returns
-------
model : GEE model instance
Notes
-----
`data` must define __getitem__ with the keys in the formula
terms, e.g. a numpy structured or rec array, a dictionary, or
a pandas DataFrame. `args` and `kwargs` are passed on to the
model instantiation.
""" % {'missing_param_doc': base._missing_param_doc}
groups_name = "Groups"
if isinstance(groups, str):
groups_name = groups
groups = data[groups]
if isinstance(time, str):
time = data[time]
if isinstance(offset, str):
offset = data[offset]
if isinstance(exposure, str):
exposure = data[exposure]
dep_data = kwargs.get("dep_data")
dep_data_names = None
if dep_data is not None:
if isinstance(dep_data, str):
dep_data = patsy.dmatrix(dep_data, data,
return_type='dataframe')
dep_data_names = dep_data.columns.tolist()
else:
dep_data_names = list(dep_data)
dep_data = data[dep_data]
kwargs["dep_data"] = np.asarray(dep_data)
family = None
if "family" in kwargs:
family = kwargs["family"]
del kwargs["family"]
model = super(GEE, cls).from_formula(formula, data=data, subset=subset,
groups=groups, time=time,
offset=offset,
exposure=exposure,
family=family,
*args, **kwargs)
if dep_data_names is not None:
model._dep_data_names = dep_data_names
model._groups_name = groups_name
return model
def cluster_list(self, array):
"""
Returns `array` split into subarrays corresponding to the
cluster structure.
"""
if array.ndim == 1:
return [np.array(array[self.group_indices[k]])
for k in self.group_labels]
else:
return [np.array(array[self.group_indices[k], :])
for k in self.group_labels]
def compare_score_test(self, submodel):
"""
Perform a score test for the given submodel against this model.
Parameters
----------
submodel : GEEResults instance
A fitted GEE model that is a submodel of this model.
Returns
-------
A dictionary with keys "statistic", "p-value", and "df",
containing the score test statistic, its chi^2 p-value,
and the degrees of freedom used to compute the p-value.
Notes
-----
The score test can be performed without calling 'fit' on the
larger model. The provided submodel must be obtained from a
fitted GEE.
This method performs the same score test as can be obtained by
fitting the GEE with a linear constraint and calling `score_test`
on the results.
References
----------
Xu Guo and Wei Pan (2002). "Small sample performance of the score
test in GEE".
http://www.sph.umn.edu/faculty1/wp-content/uploads/2012/11/rr2002-013.pdf
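Examples
--------
A minimal sketch (not a definitive recipe); ``endog``, ``exog``,
``exog_sub`` and ``groups`` are assumed to be pre-existing
array-like objects, with the columns of ``exog_sub`` lying in the
column space of ``exog``:
>>> import statsmodels.api as sm
>>> sub_result = sm.GEE(endog, exog_sub, groups).fit()
>>> full_model = sm.GEE(endog, exog, groups)
>>> rslt = full_model.compare_score_test(sub_result)
>>> print(rslt["statistic"], rslt["df"], rslt["p-value"])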
"""
# Since the model has not been fit, its scaletype has not been
# set. So give it the scaletype of the submodel.
self.scaletype = submodel.model.scaletype
# Check consistency between model and submodel (not a comprehensive
# check)
submod = submodel.model
if self.exog.shape[0] != submod.exog.shape[0]:
msg = "Model and submodel have different numbers of cases."
raise ValueError(msg)
if self.exog.shape[1] == submod.exog.shape[1]:
msg = "Model and submodel have the same number of variables"
warnings.warn(msg)
if not isinstance(self.family, type(submod.family)):
msg = "Model and submodel have different GLM families."
warnings.warn(msg)
if not isinstance(self.cov_struct, type(submod.cov_struct)):
warnings.warn("Model and submodel have different GEE covariance "
"structures.")
if not np.equal(self.weights, submod.weights).all():
msg = "Model and submodel should have the same weights."
warnings.warn(msg)
# Get the positions of the submodel variables in the
# parent model
qm, qc = _score_test_submodel(self, submodel.model)
if qm is None:
msg = "The provided model is not a submodel."
raise ValueError(msg)
# Embed the submodel params into a params vector for the
# parent model
params_ex = np.dot(qm, submodel.params)
# Attempt to preserve the state of the parent model
cov_struct_save = self.cov_struct
import copy
cached_means_save = copy.deepcopy(self.cached_means)
# Get the score vector of the submodel params in
# the parent model
self.cov_struct = submodel.cov_struct
self.update_cached_means(params_ex)
_, score = self._update_mean_params()
if score is None:
msg = "Singular matrix encountered in GEE score test"
warnings.warn(msg, ConvergenceWarning)
return None
if not hasattr(self, "ddof_scale"):
self.ddof_scale = self.exog.shape[1]
if not hasattr(self, "scaling_factor"):
self.scaling_factor = 1
_, ncov1, cmat = self._covmat()
score2 = np.dot(qc.T, score)
try:
amat = np.linalg.inv(ncov1)
except np.linalg.LinAlgError:
amat = np.linalg.pinv(ncov1)
bmat_11 = np.dot(qm.T, np.dot(cmat, qm))
bmat_22 = np.dot(qc.T, np.dot(cmat, qc))
bmat_12 = np.dot(qm.T, np.dot(cmat, qc))
amat_11 = np.dot(qm.T, np.dot(amat, qm))
amat_12 = np.dot(qm.T, np.dot(amat, qc))
try:
ab = np.linalg.solve(amat_11, bmat_12)
except np.linalg.LinAlgError:
ab = np.dot(np.linalg.pinv(amat_11), bmat_12)
score_cov = bmat_22 - np.dot(amat_12.T, ab)
try:
aa = np.linalg.solve(amat_11, amat_12)
except np.linalg.LinAlgError:
aa = np.dot(np.linalg.pinv(amat_11), amat_12)
score_cov -= np.dot(bmat_12.T, aa)
try:
ab = np.linalg.solve(amat_11, bmat_11)
except np.linalg.LinAlgError:
ab = np.dot(np.linalg.pinv(amat_11), bmat_11)
try:
aa = np.linalg.solve(amat_11, amat_12)
except np.linalg.LinAlgError:
aa = np.dot(np.linalg.pinv(amat_11), amat_12)
score_cov += np.dot(amat_12.T, np.dot(ab, aa))
# Attempt to restore state
self.cov_struct = cov_struct_save
self.cached_means = cached_means_save
from scipy.stats.distributions import chi2
try:
sc2 = np.linalg.solve(score_cov, score2)
except np.linalg.LinAlgError:
sc2 = np.dot(np.linalg.pinv(score_cov), score2)
score_statistic = np.dot(score2, sc2)
score_df = len(score2)
score_pvalue = 1 - chi2.cdf(score_statistic, score_df)
return {"statistic": score_statistic,
"df": score_df,
"p-value": score_pvalue}
def estimate_scale(self):
"""
Estimate the dispersion/scale.
"""
if self.scaletype is None:
if isinstance(self.family, (families.Binomial, families.Poisson,
families.NegativeBinomial,
_Multinomial)):
return 1.
elif isinstance(self.scaletype, float):
return np.array(self.scaletype)
endog = self.endog_li
cached_means = self.cached_means
nobs = self.nobs
varfunc = self.family.variance
scale = 0.
fsum = 0.
for i in range(self.num_group):
if len(endog[i]) == 0:
continue
expval, _ = cached_means[i]
sdev = np.sqrt(varfunc(expval))
resid = (endog[i] - expval) / sdev
if self.weights is not None:
f = self.weights_li[i]
scale += np.sum(f * (resid ** 2))
fsum += f.sum()
else:
scale += np.sum(resid ** 2)
fsum += len(resid)
scale /= (fsum * (nobs - self.ddof_scale) / float(nobs))
return scale
def mean_deriv(self, exog, lin_pred):
"""
Derivative of the expected endog with respect to the parameters.
Parameters
----------
exog : array_like
The exogeneous data at which the derivative is computed.
lin_pred : array_like
The values of the linear predictor.
Returns
-------
The value of the derivative of the expected endog with respect
to the parameter vector.
Notes
-----
If there is an offset or exposure, it should be added to
`lin_pred` prior to calling this function.
"""
idl = self.family.link.inverse_deriv(lin_pred)
dmat = exog * idl[:, None]
return dmat
def mean_deriv_exog(self, exog, params, offset_exposure=None):
"""
Derivative of the expected endog with respect to exog.
Parameters
----------
exog : array_like
Values of the independent variables at which the derivative
is calculated.
params : array_like
Parameter values at which the derivative is calculated.
offset_exposure : array_like, optional
Combined offset and exposure.
Returns
-------
The derivative of the expected endog with respect to exog.
"""
lin_pred = np.dot(exog, params)
if offset_exposure is not None:
lin_pred += offset_exposure
idl = self.family.link.inverse_deriv(lin_pred)
dmat = np.outer(idl, params)
return dmat
def _update_mean_params(self):
"""
Returns
-------
update : array_like
The update vector such that params + update is the next
iterate when solving the score equations.
score : array_like
The current value of the score equations, not
incorporating the scale parameter. If desired,
multiply this vector by the scale parameter to
incorporate the scale.
"""
endog = self.endog_li
exog = self.exog_li
weights = getattr(self, "weights_li", None)
cached_means = self.cached_means
varfunc = self.family.variance
bmat, score = 0, 0
for i in range(self.num_group):
expval, lpr = cached_means[i]
resid = endog[i] - expval
dmat = self.mean_deriv(exog[i], lpr)
sdev = np.sqrt(varfunc(expval))
if weights is not None:
w = weights[i]
wresid = resid * w
wdmat = dmat * w[:, None]
else:
wresid = resid
wdmat = dmat
rslt = self.cov_struct.covariance_matrix_solve(
expval, i, sdev, (wdmat, wresid))
if rslt is None:
return None, None
vinv_d, vinv_resid = tuple(rslt)
bmat += np.dot(dmat.T, vinv_d)
score += np.dot(dmat.T, vinv_resid)
try:
update = np.linalg.solve(bmat, score)
except np.linalg.LinAlgError:
update = np.dot(np.linalg.pinv(bmat), score)
self._fit_history["cov_adjust"].append(
self.cov_struct.cov_adjust)
return update, score
def update_cached_means(self, mean_params):
"""
cached_means should always contain the most recent calculation
of the group-wise mean vectors. This function should be
called every time the regression parameters are changed, to
keep the cached means up to date.
"""
endog = self.endog_li
exog = self.exog_li
offset = self.offset_li
linkinv = self.family.link.inverse
self.cached_means = []
for i in range(self.num_group):
if len(endog[i]) == 0:
continue
lpr = np.dot(exog[i], mean_params)
if offset is not None:
lpr += offset[i]
expval = linkinv(lpr)
self.cached_means.append((expval, lpr))
def _covmat(self):
"""
Returns the sampling covariance matrix of the regression
parameters and related quantities.
Returns
-------
cov_robust : array_like
The robust, or sandwich estimate of the covariance, which
is meaningful even if the working covariance structure is
incorrectly specified.
cov_naive : array_like
The model-based estimate of the covariance, which is
meaningful if the covariance structure is correctly
specified.
cmat : array_like
The center matrix of the sandwich expression, used in
obtaining score test results.
"""
endog = self.endog_li
exog = self.exog_li
weights = getattr(self, "weights_li", None)
varfunc = self.family.variance
cached_means = self.cached_means
# Calculate the naive (model-based) and robust (sandwich)
# covariances.
bmat, cmat = 0, 0
for i in range(self.num_group):
expval, lpr = cached_means[i]
resid = endog[i] - expval
dmat = self.mean_deriv(exog[i], lpr)
sdev = np.sqrt(varfunc(expval))
if weights is not None:
w = weights[i]
wresid = resid * w
wdmat = dmat * w[:, None]
else:
wresid = resid
wdmat = dmat
rslt = self.cov_struct.covariance_matrix_solve(
expval, i, sdev, (wdmat, wresid))
if rslt is None:
return None, None, None
vinv_d, vinv_resid = tuple(rslt)
bmat += np.dot(dmat.T, vinv_d)
dvinv_resid = np.dot(dmat.T, vinv_resid)
cmat += np.outer(dvinv_resid, dvinv_resid)
scale = self.estimate_scale()
try:
bmati = np.linalg.inv(bmat)
except np.linalg.LinAlgError:
bmati = np.linalg.pinv(bmat)
cov_naive = bmati * scale
cov_robust = np.dot(bmati, np.dot(cmat, bmati))
cov_naive *= self.scaling_factor
cov_robust *= self.scaling_factor
return cov_robust, cov_naive, cmat
# Calculate the bias-corrected sandwich estimate of Mancl and
# DeRouen.
def _bc_covmat(self, cov_naive):
cov_naive = cov_naive / self.scaling_factor
endog = self.endog_li
exog = self.exog_li
varfunc = self.family.variance
cached_means = self.cached_means
scale = self.estimate_scale()
bcm = 0
for i in range(self.num_group):
expval, lpr = cached_means[i]
resid = endog[i] - expval
dmat = self.mean_deriv(exog[i], lpr)
sdev = np.sqrt(varfunc(expval))
rslt = self.cov_struct.covariance_matrix_solve(
expval, i, sdev, (dmat,))
if rslt is None:
return None
vinv_d = rslt[0]
vinv_d /= scale
hmat = np.dot(vinv_d, cov_naive)
hmat = np.dot(hmat, dmat.T).T
f = self.weights_li[i] if self.weights is not None else 1.
aresid = np.linalg.solve(np.eye(len(resid)) - hmat, resid)
rslt = self.cov_struct.covariance_matrix_solve(
expval, i, sdev, (aresid,))
if rslt is None:
return None
srt = rslt[0]
srt = f * np.dot(dmat.T, srt) / scale
bcm += np.outer(srt, srt)
cov_robust_bc = np.dot(cov_naive, np.dot(bcm, cov_naive))
cov_robust_bc *= self.scaling_factor
return cov_robust_bc
def _starting_params(self):
if np.isscalar(self._offset_exposure):
offset = None
else:
offset = self._offset_exposure
model = GLM(self.endog, self.exog, family=self.family,
offset=offset, freq_weights=self.weights)
result = model.fit()
return result.params
@Appender(_gee_fit_doc)
def fit(self, maxiter=60, ctol=1e-6, start_params=None,
params_niter=1, first_dep_update=0,
cov_type='robust', ddof_scale=None, scaling_factor=1.,
scale=None):
self.scaletype = scale
# Subtract this number from the total sample size when
# normalizing the scale parameter estimate.
if ddof_scale is None:
self.ddof_scale = self.exog.shape[1]
else:
if not ddof_scale >= 0:
raise ValueError(
"ddof_scale must be a non-negative number or None")
self.ddof_scale = ddof_scale
self.scaling_factor = scaling_factor
self._fit_history = defaultdict(list)
if self.weights is not None and cov_type == 'naive':
raise ValueError("when using weights, cov_type may not be naive")
if start_params is None:
mean_params = self._starting_params()
else:
start_params = np.asarray(start_params)
mean_params = start_params.copy()
self.update_cached_means(mean_params)
del_params = -1.
num_assoc_updates = 0
for itr in range(maxiter):
update, score = self._update_mean_params()
if update is None:
warnings.warn("Singular matrix encountered in GEE update",
ConvergenceWarning)
break
mean_params += update
self.update_cached_means(mean_params)
# L2 norm of the change in mean structure parameters at
# this iteration.
del_params = np.sqrt(np.sum(score ** 2))
self._fit_history['params'].append(mean_params.copy())
self._fit_history['score'].append(score)
self._fit_history['dep_params'].append(
self.cov_struct.dep_params)
# Do not exit until the association parameters have been
# updated at least once.
if (del_params < ctol and
(num_assoc_updates > 0 or self.update_dep is False)):
break
# Update the dependence structure
if (self.update_dep and (itr % params_niter) == 0
and (itr >= first_dep_update)):
self._update_assoc(mean_params)
num_assoc_updates += 1
if del_params >= ctol:
warnings.warn("Iteration limit reached prior to convergence",
IterationLimitWarning)
if mean_params is None:
warnings.warn("Unable to estimate GEE parameters.",
ConvergenceWarning)
return None
bcov, ncov, _ = self._covmat()
if bcov is None:
warnings.warn("Estimated covariance structure for GEE "
"estimates is singular", ConvergenceWarning)
return None
bc_cov = None
if cov_type == "bias_reduced":
bc_cov = self._bc_covmat(ncov)
if self.constraint is not None:
x = mean_params.copy()
mean_params, bcov = self._handle_constraint(mean_params, bcov)
if mean_params is None:
warnings.warn("Unable to estimate constrained GEE "
"parameters.", ConvergenceWarning)
return None
y, ncov = self._handle_constraint(x, ncov)
if y is None:
warnings.warn("Unable to estimate constrained GEE "
"parameters.", ConvergenceWarning)
return None
if bc_cov is not None:
y, bc_cov = self._handle_constraint(x, bc_cov)
if y is None:
warnings.warn("Unable to estimate constrained GEE "
"parameters.", ConvergenceWarning)
return None
scale = self.estimate_scale()
# kwargs to add to results instance, need to be available in __init__
res_kwds = dict(cov_type=cov_type,
cov_robust=bcov,
cov_naive=ncov,
cov_robust_bc=bc_cov)
# The superclass constructor will multiply the covariance
# matrix argument bcov by scale, which we do not want, so we
# divide bcov by the scale parameter here
results = GEEResults(self, mean_params, bcov / scale, scale,
cov_type=cov_type, use_t=False,
attr_kwds=res_kwds)
# attributes not needed during results__init__
results.fit_history = self._fit_history
self.fit_history = defaultdict(list)
results.score_norm = del_params
results.converged = (del_params < ctol)
results.cov_struct = self.cov_struct
results.params_niter = params_niter
results.first_dep_update = first_dep_update
results.ctol = ctol
results.maxiter = maxiter
# These will be copied over to subclasses when upgrading.
results._props = ["cov_type", "use_t",
"cov_params_default", "cov_robust",
"cov_naive", "cov_robust_bc",
"fit_history",
"score_norm", "converged", "cov_struct",
"params_niter", "first_dep_update", "ctol",
"maxiter"]
return GEEResultsWrapper(results)
def _update_regularized(self, params, pen_wt, scad_param, eps):
sn, hm = 0, 0
for i in range(self.num_group):
expval, _ = self.cached_means[i]
resid = self.endog_li[i] - expval
sdev = np.sqrt(self.family.variance(expval))
ex = self.exog_li[i] * sdev[:, None]**2
rslt = self.cov_struct.covariance_matrix_solve(
expval, i, sdev, (resid, ex))
sn0 = rslt[0]
sn += np.dot(ex.T, sn0)
hm0 = rslt[1]
hm += np.dot(ex.T, hm0)
# Wang et al. divide sn here by num_group, but that
# seems to be incorrect
ap = np.abs(params)
clipped = np.clip(scad_param * pen_wt - ap, 0, np.inf)
en = pen_wt * clipped * (ap > pen_wt)
en /= (scad_param - 1) * pen_wt
en += pen_wt * (ap <= pen_wt)
en /= eps + ap
hm.flat[::hm.shape[0] + 1] += self.num_group * en
sn -= self.num_group * en * params
try:
update = np.linalg.solve(hm, sn)
except np.linalg.LinAlgError:
update = np.dot(np.linalg.pinv(hm), sn)
msg = "Encountered singularity in regularized GEE update"
warnings.warn(msg)
hm *= self.estimate_scale()
return update, hm
def _regularized_covmat(self, mean_params):
self.update_cached_means(mean_params)
ma = 0
for i in range(self.num_group):
expval, _ = self.cached_means[i]
resid = self.endog_li[i] - expval
sdev = np.sqrt(self.family.variance(expval))
ex = self.exog_li[i] * sdev[:, None]**2
rslt = self.cov_struct.covariance_matrix_solve(
expval, i, sdev, (resid,))
ma0 = np.dot(ex.T, rslt[0])
ma += np.outer(ma0, ma0)
return ma
def fit_regularized(self, pen_wt, scad_param=3.7, maxiter=100,
ddof_scale=None, update_assoc=5,
ctol=1e-5, ztol=1e-3, eps=1e-6, scale=None):
"""
Regularized estimation for GEE.
Parameters
----------
pen_wt : float
The penalty weight (a non-negative scalar).
scad_param : float
Non-negative scalar determining the shape of the Scad
penalty.
maxiter : int
The maximum number of iterations.
ddof_scale : int
Value to subtract from `nobs` when calculating the
denominator degrees of freedom for t-statistics, defaults
to the number of columns in `exog`.
update_assoc : int
The dependence parameters are updated every `update_assoc`
iterations of the mean structure parameter updates.
ctol : float
Convergence criterion, default is one order of magnitude
smaller than proposed in section 3.1 of Wang et al.
ztol : float
Coefficients smaller than this value are treated as
being zero, default is based on section 5 of Wang et al.
eps : non-negative scalar
Numerical constant, see section 3.2 of Wang et al.
scale : float or string
If a float, this value is used as the scale parameter.
If "X2", the scale parameter is always estimated using
Pearson's chi-square method (e.g. as in a quasi-Poisson
analysis). If None, the default approach for the family
is used to estimate the scale parameter.
Returns
-------
GEEResults instance. Note that not all methods of the results
class make sense when the model has been fit with regularization.
Notes
-----
This implementation assumes that the link is canonical.
References
----------
Wang L, Zhou J, Qu A. (2012). Penalized generalized estimating
equations for high-dimensional longitudinal data analysis.
Biometrics. 2012 Jun;68(2):353-60.
doi: 10.1111/j.1541-0420.2011.01678.x.
https://www.ncbi.nlm.nih.gov/pubmed/21955051
http://users.stat.umn.edu/~wangx346/research/GEE_selection.pdf
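Examples
--------
A minimal sketch; ``endog``, ``exog`` and ``groups`` are assumed to
be pre-existing array-like objects and the penalty weight shown is
purely illustrative:
>>> import statsmodels.api as sm
>>> fam = sm.families.Poisson()
>>> model = sm.GEE(endog, exog, groups, family=fam)
>>> result = model.fit_regularized(pen_wt=0.01)
>>> print(result.params)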
"""
self.scaletype = scale
mean_params = np.zeros(self.exog.shape[1])
self.update_cached_means(mean_params)
converged = False
fit_history = defaultdict(list)
# Subtract this number from the total sample size when
# normalizing the scale parameter estimate.
if ddof_scale is None:
self.ddof_scale = self.exog.shape[1]
else:
if not ddof_scale >= 0:
raise ValueError(
"ddof_scale must be a non-negative number or None")
self.ddof_scale = ddof_scale
# Keep this private for now. In some cases the early steps are
# very small so it seems necessary to ensure a certain minimum
# number of iterations before testing for convergence.
miniter = 20
for itr in range(maxiter):
update, hm = self._update_regularized(
mean_params, pen_wt, scad_param, eps)
if update is None:
msg = "Singular matrix encountered in regularized GEE update",
warnings.warn(msg, ConvergenceWarning)
break
if itr > miniter and np.sqrt(np.sum(update**2)) < ctol:
converged = True
break
mean_params += update
fit_history['params'].append(mean_params.copy())
self.update_cached_means(mean_params)
if itr != 0 and (itr % update_assoc == 0):
self._update_assoc(mean_params)
if not converged:
msg = "GEE.fit_regularized did not converge"
warnings.warn(msg)
mean_params[np.abs(mean_params) < ztol] = 0
self._update_assoc(mean_params)
ma = self._regularized_covmat(mean_params)
cov = np.linalg.solve(hm, ma)
cov = np.linalg.solve(hm, cov.T)
# kwargs to add to results instance, need to be available in __init__
res_kwds = dict(cov_type="robust", cov_robust=cov)
scale = self.estimate_scale()
rslt = GEEResults(self, mean_params, cov, scale,
regularized=True, attr_kwds=res_kwds)
rslt.fit_history = fit_history
return GEEResultsWrapper(rslt)
def _handle_constraint(self, mean_params, bcov):
"""
Expand the parameter estimate `mean_params` and covariance matrix
`bcov` to the coordinate system of the unconstrained model.
Parameters
----------
mean_params : array_like
A parameter vector estimate for the reduced model.
bcov : array_like
The covariance matrix of mean_params.
Returns
-------
mean_params : array_like
The input parameter vector mean_params, expanded to the
coordinate system of the full model
bcov : array_like
The input covariance matrix bcov, expanded to the
coordinate system of the full model
"""
# The number of variables in the full model
red_p = len(mean_params)
full_p = self.constraint.lhs.shape[1]
mean_params0 = np.r_[mean_params, np.zeros(full_p - red_p)]
# Get the score vector under the full model.
save_exog_li = self.exog_li
self.exog_li = self.constraint.exog_fulltrans_li
import copy
save_cached_means = copy.deepcopy(self.cached_means)
self.update_cached_means(mean_params0)
_, score = self._update_mean_params()
if score is None:
warnings.warn("Singular matrix encountered in GEE score test",
ConvergenceWarning)
return None, None
_, ncov1, cmat = self._covmat()
scale = self.estimate_scale()
cmat = cmat / scale ** 2
score2 = score[red_p:] / scale
amat = np.linalg.inv(ncov1)
bmat_11 = cmat[0:red_p, 0:red_p]
bmat_22 = cmat[red_p:, red_p:]
bmat_12 = cmat[0:red_p, red_p:]
amat_11 = amat[0:red_p, 0:red_p]
amat_12 = amat[0:red_p, red_p:]
score_cov = bmat_22 - np.dot(amat_12.T,
np.linalg.solve(amat_11, bmat_12))
score_cov -= np.dot(bmat_12.T,
np.linalg.solve(amat_11, amat_12))
score_cov += np.dot(amat_12.T,
np.dot(np.linalg.solve(amat_11, bmat_11),
np.linalg.solve(amat_11, amat_12)))
from scipy.stats.distributions import chi2
score_statistic = np.dot(score2,
np.linalg.solve(score_cov, score2))
score_df = len(score2)
score_pvalue = 1 - chi2.cdf(score_statistic, score_df)
self.score_test_results = {"statistic": score_statistic,
"df": score_df,
"p-value": score_pvalue}
mean_params = self.constraint.unpack_param(mean_params)
bcov = self.constraint.unpack_cov(bcov)
self.exog_li = save_exog_li
self.cached_means = save_cached_means
self.exog = self.constraint.restore_exog()
return mean_params, bcov
def _update_assoc(self, params):
"""
Update the association parameters
"""
self.cov_struct.update(params)
def _derivative_exog(self, params, exog=None, transform='dydx',
dummy_idx=None, count_idx=None):
"""
For computing marginal effects, returns dF(XB) / dX where F(.)
is the fitted mean.
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff.
"""
# This form should be appropriate for group 1 probit, logit,
# logistic, cloglog, heckprob, xtprobit.
offset_exposure = None
if exog is None:
exog = self.exog
offset_exposure = self._offset_exposure
margeff = self.mean_deriv_exog(exog, params, offset_exposure)
if 'ex' in transform:
margeff *= exog
if 'ey' in transform:
margeff /= self.predict(params, exog)[:, None]
if count_idx is not None:
from statsmodels.discrete.discrete_margins import (
_get_count_effects)
margeff = _get_count_effects(margeff, exog, count_idx, transform,
self, params)
if dummy_idx is not None:
from statsmodels.discrete.discrete_margins import (
_get_dummy_effects)
margeff = _get_dummy_effects(margeff, exog, dummy_idx, transform,
self, params)
return margeff
def qic(self, params, scale, cov_params):
"""
Returns quasi-information criteria and quasi-likelihood values.
Parameters
----------
params : array_like
The GEE estimates of the regression parameters.
scale : scalar
Estimated scale parameter
cov_params : array_like
An estimate of the covariance matrix for the
model parameters. Conventionally this is the robust
covariance matrix.
Returns
-------
ql : scalar
The quasi-likelihood value
qic : scalar
A QIC that can be used to compare the mean and covariance
structures of the model.
qicu : scalar
A simplified QIC that can be used to compare mean structures
but not covariance structures
Notes
-----
The quasi-likelihood used here is obtained by numerically evaluating
Wedderburn's integral representation of the quasi-likelihood function.
This approach is valid for all families and links. Many other
packages use analytical expressions for quasi-likelihoods that are
valid in special cases where the link function is canonical. These
analytical expressions may omit additive constants that only depend
on the data. Therefore, the numerical values of our QL and QIC values
will differ from the values reported by other packages. However only
the differences between two QIC values calculated for different models
using the same data are meaningful. Our QIC should produce the same
QIC differences as other software.
When using the QIC for models with unknown scale parameter, use a
common estimate of the scale parameter for all models being compared.
References
----------
.. [*] W. Pan (2001). Akaike's information criterion in generalized
estimating equations. Biometrics (57) 1.
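Examples
--------
A minimal sketch; ``result`` is assumed to be a fitted GEEResults
instance obtained from this model:
>>> ql, qic, qicu = result.model.qic(result.params, result.scale,
...                                  result.cov_params())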
"""
varfunc = self.family.variance
means = []
omega = 0.0
# omega^-1 is the model-based covariance assuming independence
for i in range(self.num_group):
expval, lpr = self.cached_means[i]
means.append(expval)
dmat = self.mean_deriv(self.exog_li[i], lpr)
omega += np.dot(dmat.T, dmat) / scale
means = np.concatenate(means)
# The quasi-likelihood, use change of variables so the integration is
# from -1 to 1.
endog_li = np.concatenate(self.endog_li)
du = means - endog_li
nstep = 10000
qv = np.empty(nstep)
xv = np.linspace(-0.99999, 1, nstep)
for i, g in enumerate(xv):
u = endog_li + (g + 1) * du / 2.0
vu = varfunc(u)
qv[i] = -np.sum(du**2 * (g + 1) / vu)
qv /= (4 * scale)
from scipy.integrate import trapz
ql = trapz(qv, dx=xv[1] - xv[0])
qicu = -2 * ql + 2 * self.exog.shape[1]
qic = -2 * ql + 2 * np.trace(np.dot(omega, cov_params))
return ql, qic, qicu
class GEEResults(GLMResults):
__doc__ = (
"This class summarizes the fit of a marginal regression model "
"using GEE.\n" + _gee_results_doc)
def __init__(self, model, params, cov_params, scale,
cov_type='robust', use_t=False, regularized=False,
**kwds):
super(GEEResults, self).__init__(
model, params, normalized_cov_params=cov_params,
scale=scale)
# not added by super
self.df_resid = model.df_resid
self.df_model = model.df_model
self.family = model.family
attr_kwds = kwds.pop('attr_kwds', {})
self.__dict__.update(attr_kwds)
# we do not do this if the cov_type has already been set
# subclasses can set it through attr_kwds
if not (hasattr(self, 'cov_type') and
hasattr(self, 'cov_params_default')):
self.cov_type = cov_type # keep alias
covariance_type = self.cov_type.lower()
allowed_covariances = ["robust", "naive", "bias_reduced"]
if covariance_type not in allowed_covariances:
msg = ("GEE: `cov_type` must be one of " +
", ".join(allowed_covariances))
raise ValueError(msg)
if cov_type == "robust":
cov = self.cov_robust
elif cov_type == "naive":
cov = self.cov_naive
elif cov_type == "bias_reduced":
cov = self.cov_robust_bc
self.cov_params_default = cov
else:
if self.cov_type != cov_type:
raise ValueError('cov_type in argument is different from '
'already attached cov_type')
@cache_readonly
def resid(self):
"""
The response residuals.
"""
return self.resid_response
def standard_errors(self, cov_type="robust"):
"""
This is a convenience function that returns the standard
errors for any covariance type. The value of `bse` is the
standard errors for whichever covariance type is specified as
an argument to `fit` (defaults to "robust").
Parameters
----------
cov_type : str
One of "robust", "naive", or "bias_reduced". Determines
the covariance used to compute standard errors. Defaults
to "robust".
"""
# Check covariance_type
covariance_type = cov_type.lower()
allowed_covariances = ["robust", "naive", "bias_reduced"]
if covariance_type not in allowed_covariances:
msg = ("GEE: `covariance_type` must be one of " +
", ".join(allowed_covariances))
raise ValueError(msg)
if covariance_type == "robust":
return np.sqrt(np.diag(self.cov_robust))
elif covariance_type == "naive":
return np.sqrt(np.diag(self.cov_naive))
elif covariance_type == "bias_reduced":
if self.cov_robust_bc is None:
raise ValueError(
"GEE: `bias_reduced` covariance not available")
return np.sqrt(np.diag(self.cov_robust_bc))
# Need to override to allow for different covariance types.
@cache_readonly
def bse(self):
return self.standard_errors(self.cov_type)
def score_test(self):
"""
Return the results of a score test for a linear constraint.
Returns
-------
A dictionary containing the p-value, the test statistic,
and the degrees of freedom for the score test.
Notes
-----
See also GEE.compare_score_test for an alternative way to perform
a score test. GEEResults.score_test is more general, in that it
supports testing arbitrary linear equality constraints. However
GEE.compare_score_test might be easier to use when comparing
two explicit models.
References
----------
Xu Guo and Wei Pan (2002). "Small sample performance of the score
test in GEE".
http://www.sph.umn.edu/faculty1/wp-content/uploads/2012/11/rr2002-013.pdf
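Examples
--------
A minimal sketch; ``endog``, ``exog`` and ``groups`` are assumed to
be pre-existing array-like objects, and the constraint shown fixes
the first coefficient at zero:
>>> import numpy as np
>>> import statsmodels.api as sm
>>> lhs = np.zeros((1, exog.shape[1]))
>>> lhs[0, 0] = 1
>>> model = sm.GEE(endog, exog, groups, constraint=(lhs, np.zeros(1)))
>>> result = model.fit()
>>> print(result.score_test())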
"""
if not hasattr(self.model, "score_test_results"):
msg = "score_test on results instance only available when "
msg += " model was fit with constraints"
raise ValueError(msg)
return self.model.score_test_results
@cache_readonly
def resid_split(self):
"""
Returns the residuals, the endogeneous data minus the fitted
values from the model. The residuals are returned as a list
of arrays containing the residuals for each cluster.
"""
sresid = []
for v in self.model.group_labels:
ii = self.model.group_indices[v]
sresid.append(self.resid[ii])
return sresid
@cache_readonly
def resid_centered(self):
"""
Returns the residuals centered within each group.
"""
cresid = self.resid.copy()
for v in self.model.group_labels:
ii = self.model.group_indices[v]
cresid[ii] -= cresid[ii].mean()
return cresid
@cache_readonly
def resid_centered_split(self):
"""
Returns the residuals centered within each group. The
residuals are returned as a list of arrays containing the
centered residuals for each cluster.
"""
sresid = []
for v in self.model.group_labels:
ii = self.model.group_indices[v]
sresid.append(self.centered_resid[ii])
return sresid
def qic(self, scale=None):
"""
Returns the QIC and QICu information criteria.
For families with a scale parameter (e.g. Gaussian), provide
as the scale argument the estimated scale from the largest
model under consideration.
If the scale parameter is not provided, the estimated scale
parameter is used. Doing this does not allow comparisons of
QIC values between models.
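Examples
--------
A minimal sketch; ``result_big`` and ``result_small`` are assumed to
be fitted GEEResults instances for two candidate mean structures on
the same data:
>>> scale = result_big.scale
>>> qic1, qicu1 = result_big.qic(scale=scale)
>>> qic2, qicu2 = result_small.qic(scale=scale)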
"""
# It is easy to forget to set the scale parameter. Sometimes
# this is intentional, so we warn.
if scale is None:
warnings.warn("QIC values obtained using scale=None are not "
"appropriate for comparing models")
if scale is None:
scale = self.scale
_, qic, qicu = self.model.qic(self.params, scale,
self.cov_params())
return qic, qicu
# FIXME: alias to be removed, temporary backwards compatibility
split_resid = resid_split
centered_resid = resid_centered
split_centered_resid = resid_centered_split
@Appender(_plot_added_variable_doc % {'extra_params_doc': ''})
def plot_added_variable(self, focus_exog, resid_type=None,
use_glm_weights=True, fit_kwargs=None,
ax=None):
from statsmodels.graphics.regressionplots import plot_added_variable
fig = plot_added_variable(self, focus_exog,
resid_type=resid_type,
use_glm_weights=use_glm_weights,
fit_kwargs=fit_kwargs, ax=ax)
return fig
@Appender(_plot_partial_residuals_doc % {'extra_params_doc': ''})
def plot_partial_residuals(self, focus_exog, ax=None):
from statsmodels.graphics.regressionplots import plot_partial_residuals
return plot_partial_residuals(self, focus_exog, ax=ax)
@Appender(_plot_ceres_residuals_doc % {'extra_params_doc': ''})
def plot_ceres_residuals(self, focus_exog, frac=0.66, cond_means=None,
ax=None):
from statsmodels.graphics.regressionplots import plot_ceres_residuals
return plot_ceres_residuals(self, focus_exog, frac,
cond_means=cond_means, ax=ax)
def conf_int(self, alpha=.05, cols=None, cov_type=None):
"""
Returns confidence intervals for the fitted parameters.
Parameters
----------
alpha : float, optional
The `alpha` level for the confidence interval. i.e., The
default `alpha` = .05 returns a 95% confidence interval.
cols : array_like, optional
`cols` specifies which confidence intervals to return
cov_type : str
The covariance type used for computing standard errors;
must be one of 'robust', 'naive', and 'bias reduced'.
See `GEE` for details.
Notes
-----
The confidence interval is based on the Gaussian distribution.
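Examples
--------
A minimal sketch; ``result`` is assumed to be a fitted GEEResults
instance:
>>> ci = result.conf_int(alpha=0.05, cov_type="naive")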
"""
# super does not allow to specify cov_type and method is not
# implemented,
# FIXME: remove this method here
if cov_type is None:
bse = self.bse
else:
bse = self.standard_errors(cov_type=cov_type)
params = self.params
dist = stats.norm
q = dist.ppf(1 - alpha / 2)
if cols is None:
lower = self.params - q * bse
upper = self.params + q * bse
else:
cols = np.asarray(cols)
lower = params[cols] - q * bse[cols]
upper = params[cols] + q * bse[cols]
return np.asarray(lzip(lower, upper))
def summary(self, yname=None, xname=None, title=None, alpha=.05):
"""
Summarize the GEE regression results
Parameters
----------
yname : str, optional
Default is `y`
xname : list[str], optional
Names for the exogenous variables; the default is `var_##`,
where ## is the index of the regressor. Must match the number
of parameters in the model.
title : str, optional
Title for the top table. If not None, then this replaces
the default title
alpha : float
significance level for the confidence intervals
Returns
-------
smry : Summary instance
this holds the summary tables and text, which can be
printed or converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary : class to hold summary results
"""
top_left = [('Dep. Variable:', None),
('Model:', None),
('Method:', ['Generalized']),
('', ['Estimating Equations']),
('Family:', [self.model.family.__class__.__name__]),
('Dependence structure:',
[self.model.cov_struct.__class__.__name__]),
('Date:', None),
('Covariance type: ', [self.cov_type, ])
]
NY = [len(y) for y in self.model.endog_li]
top_right = [('No. Observations:', [sum(NY)]),
('No. clusters:', [len(self.model.endog_li)]),
('Min. cluster size:', [min(NY)]),
('Max. cluster size:', [max(NY)]),
('Mean cluster size:', ["%.1f" % np.mean(NY)]),
('Num. iterations:', ['%d' %
len(self.fit_history['params'])]),
('Scale:', ["%.3f" % self.scale]),
('Time:', None),
]
# The skew of the residuals
skew1 = stats.skew(self.resid)
kurt1 = stats.kurtosis(self.resid)
skew2 = stats.skew(self.centered_resid)
kurt2 = stats.kurtosis(self.centered_resid)
diagn_left = [('Skew:', ["%12.4f" % skew1]),
('Centered skew:', ["%12.4f" % skew2])]
diagn_right = [('Kurtosis:', ["%12.4f" % kurt1]),
('Centered kurtosis:', ["%12.4f" % kurt2])
]
if title is None:
title = self.model.__class__.__name__ + ' ' +\
"Regression Results"
# Override the exog variable names if xname is provided as an
# argument.
if xname is None:
xname = self.model.exog_names
if yname is None:
yname = self.model.endog_names
# Create summary table instance
from statsmodels.iolib.summary import Summary
smry = Summary()
smry.add_table_2cols(self, gleft=top_left, gright=top_right,
yname=yname, xname=xname,
title=title)
smry.add_table_params(self, yname=yname, xname=xname,
alpha=alpha, use_t=False)
smry.add_table_2cols(self, gleft=diagn_left,
gright=diagn_right, yname=yname,
xname=xname, title="")
return smry
def get_margeff(self, at='overall', method='dydx', atexog=None,
dummy=False, count=False):
"""Get marginal effects of the fitted model.
Parameters
----------
at : str, optional
Options are:
- 'overall', The average of the marginal effects at each
observation.
- 'mean', The marginal effects at the mean of each regressor.
- 'median', The marginal effects at the median of each regressor.
- 'zero', The marginal effects at zero for each regressor.
- 'all', The marginal effects at each observation. If `at` is 'all'
only margeff will be available.
Note that if `exog` is specified, then marginal effects for all
variables not specified by `exog` are calculated using the `at`
option.
method : str, optional
Options are:
- 'dydx' - dy/dx - No transformation is made and marginal effects
are returned. This is the default.
- 'eyex' - estimate elasticities of variables in `exog` --
d(lny)/d(lnx)
- 'dyex' - estimate semi-elasticity -- dy/d(lnx)
- 'eydx' - estimate semi-elasticity -- d(lny)/dx
Note that transformations are done after each observation is
calculated. Semi-elasticities for binary variables are computed
using the midpoint method. 'dyex' and 'eyex' do not make sense
for discrete variables.
atexog : array_like, optional
Optionally, you can provide the exogenous variables over which to
get the marginal effects. This should be a dictionary with the
zero-indexed column number as the key and the value at which to
hold that variable as the dictionary value. Default is None,
meaning all independent variables less the constant.
dummy : bool, optional
If False, treats binary variables (if present) as continuous. This
is the default. Else if True, treats binary variables as
changing from 0 to 1. Note that any variable that is either 0 or 1
is treated as binary. Each binary variable is treated separately
for now.
count : bool, optional
If False, treats count variables (if present) as continuous. This
is the default. Else if True, the marginal effect is the
change in probabilities when each observation is increased by one.
Returns
-------
effects : ndarray
the marginal effect corresponding to the input options
Notes
-----
When used after fitting a Poisson model, this returns the expected
number of events per period, assuming that the model is loglinear.
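Examples
--------
A minimal sketch; ``result`` is assumed to be a fitted GEEResults
instance:
>>> marg = result.get_margeff(at="overall", method="dydx")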
"""
if self.model.constraint is not None:
warnings.warn("marginal effects ignore constraints",
ValueWarning)
return GEEMargins(self, (at, method, atexog, dummy, count))
def plot_isotropic_dependence(self, ax=None, xpoints=10,
min_n=50):
"""
Create a plot of the pairwise products of within-group
residuals against the corresponding time differences. This
plot can be used to assess the possible form of an isotropic
covariance structure.
Parameters
----------
ax : AxesSubplot
An axes on which to draw the graph. If None, new
figure and axes objects are created
xpoints : scalar or array_like
If scalar, the number of points equally spaced points on
the time difference axis used to define bins for
calculating local means. If an array, the specific points
that define the bins.
min_n : int
The minimum sample size in a bin for the mean residual
product to be included on the plot.
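Examples
--------
A minimal sketch; ``result`` is assumed to be a fitted GEEResults
instance for a model supplied with a meaningful `time` variable:
>>> fig = result.plot_isotropic_dependence(xpoints=20, min_n=30)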
"""
from statsmodels.graphics import utils as gutils
resid = self.model.cluster_list(self.resid)
time = self.model.cluster_list(self.model.time)
# All within-group pairwise time distances (xdt) and the
# corresponding products of scaled residuals (xre).
xre, xdt = [], []
for re, ti in zip(resid, time):
ix = np.tril_indices(re.shape[0], 0)
re = re[ix[0]] * re[ix[1]] / self.scale ** 2
xre.append(re)
dists = np.sqrt(((ti[ix[0], :] - ti[ix[1], :]) ** 2).sum(1))
xdt.append(dists)
xre = np.concatenate(xre)
xdt = np.concatenate(xdt)
if ax is None:
fig, ax = gutils.create_mpl_ax(ax)
else:
fig = ax.get_figure()
# Convert to a correlation
ii = np.flatnonzero(xdt == 0)
v0 = np.mean(xre[ii])
xre /= v0
# Use the simple average to smooth, since fancier smoothers
# that trim and downweight outliers give biased results (we
# need the actual mean of a skewed distribution).
if np.isscalar(xpoints):
xpoints = np.linspace(0, max(xdt), xpoints)
dg = np.digitize(xdt, xpoints)
dgu = np.unique(dg)
hist = np.asarray([np.sum(dg == k) for k in dgu])
ii = np.flatnonzero(hist >= min_n)
dgu = dgu[ii]
dgy = np.asarray([np.mean(xre[dg == k]) for k in dgu])
dgx = np.asarray([np.mean(xdt[dg == k]) for k in dgu])
ax.plot(dgx, dgy, '-', color='orange', lw=5)
ax.set_xlabel("Time difference")
ax.set_ylabel("Product of scaled residuals")
return fig
def sensitivity_params(self, dep_params_first,
dep_params_last, num_steps):
"""
Refits the GEE model using a sequence of values for the
dependence parameters.
Parameters
----------
dep_params_first : array_like
The first dep_params in the sequence
dep_params_last : array_like
The last dep_params in the sequence
num_steps : int
The number of dep_params in the sequence
Returns
-------
results : array_like
The GEEResults objects resulting from the fits.
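Examples
--------
A minimal sketch for a scalar dependence parameter (e.g. an
exchangeable working correlation); ``result`` is assumed to be a
fitted GEEResults instance and the grid endpoints are illustrative:
>>> fits = result.sensitivity_params(0.0, 0.5, 5)
>>> [f.params for f in fits]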
"""
model = self.model
import copy
cov_struct = copy.deepcopy(self.model.cov_struct)
# We are fixing the dependence structure in each run.
update_dep = model.update_dep
model.update_dep = False
dep_params = []
results = []
for x in np.linspace(0, 1, num_steps):
dp = x * dep_params_last + (1 - x) * dep_params_first
dep_params.append(dp)
model.cov_struct = copy.deepcopy(cov_struct)
model.cov_struct.dep_params = dp
rslt = model.fit(start_params=self.params,
ctol=self.ctol,
params_niter=self.params_niter,
first_dep_update=self.first_dep_update,
cov_type=self.cov_type)
results.append(rslt)
model.update_dep = update_dep
return results
# FIXME: alias to be removed, temporary backwards compatibility
params_sensitivity = sensitivity_params
class GEEResultsWrapper(lm.RegressionResultsWrapper):
_attrs = {
'centered_resid': 'rows',
}
_wrap_attrs = wrap.union_dicts(lm.RegressionResultsWrapper._wrap_attrs,
_attrs)
wrap.populate_wrapper(GEEResultsWrapper, GEEResults) # noqa:E305
class OrdinalGEE(GEE):
__doc__ = (
" Ordinal Response Marginal Regression Model using GEE\n" +
_gee_init_doc % {'extra_params': base._missing_param_doc,
'family_doc': _gee_ordinal_family_doc,
'example': _gee_ordinal_example,
'notes': _gee_nointercept})
def __init__(self, endog, exog, groups, time=None, family=None,
cov_struct=None, missing='none', offset=None,
dep_data=None, constraint=None, **kwargs):
if family is None:
family = families.Binomial()
else:
if not isinstance(family, families.Binomial):
raise ValueError("ordinal GEE must use a Binomial family")
if cov_struct is None:
cov_struct = cov_structs.OrdinalIndependence()
endog, exog, groups, time, offset = self.setup_ordinal(
endog, exog, groups, time, offset)
super(OrdinalGEE, self).__init__(endog, exog, groups, time,
family, cov_struct, missing,
offset, dep_data, constraint)
def setup_ordinal(self, endog, exog, groups, time, offset):
"""
Restructure ordinal data as binary indicators so that they can
be analyzed using Generalized Estimating Equations.
"""
self.endog_orig = endog.copy()
self.exog_orig = exog.copy()
self.groups_orig = groups.copy()
if offset is not None:
self.offset_orig = offset.copy()
else:
self.offset_orig = None
offset = np.zeros(len(endog))
if time is not None:
self.time_orig = time.copy()
else:
self.time_orig = None
time = np.zeros((len(endog), 1))
exog = np.asarray(exog)
endog = np.asarray(endog)
groups = np.asarray(groups)
time = np.asarray(time)
offset = np.asarray(offset)
# The unique outcomes, except the greatest one.
self.endog_values = np.unique(endog)
endog_cuts = self.endog_values[0:-1]
ncut = len(endog_cuts)
nrows = ncut * len(endog)
exog_out = np.zeros((nrows, exog.shape[1]),
dtype=np.float64)
endog_out = np.zeros(nrows, dtype=np.float64)
intercepts = np.zeros((nrows, ncut), dtype=np.float64)
groups_out = np.zeros(nrows, dtype=groups.dtype)
time_out = np.zeros((nrows, time.shape[1]),
dtype=np.float64)
offset_out = np.zeros(nrows, dtype=np.float64)
jrow = 0
zipper = zip(exog, endog, groups, time, offset)
for (exog_row, endog_value, group_value, time_value,
offset_value) in zipper:
# Loop over thresholds for the indicators
for thresh_ix, thresh in enumerate(endog_cuts):
exog_out[jrow, :] = exog_row
endog_out[jrow] = (int(endog_value > thresh))
intercepts[jrow, thresh_ix] = 1
groups_out[jrow] = group_value
time_out[jrow] = time_value
offset_out[jrow] = offset_value
jrow += 1
exog_out = np.concatenate((intercepts, exog_out), axis=1)
# exog column names, including intercepts
xnames = ["I(y>%.1f)" % v for v in endog_cuts]
if isinstance(self.exog_orig, pd.DataFrame):
xnames.extend(self.exog_orig.columns)
else:
xnames.extend(["x%d" % k for k in range(1, exog.shape[1] + 1)])
exog_out = pd.DataFrame(exog_out, columns=xnames)
# Preserve the endog name if there is one
if isinstance(self.endog_orig, pd.Series):
endog_out = pd.Series(endog_out, name=self.endog_orig.name)
return endog_out, exog_out, groups_out, time_out, offset_out
def _starting_params(self):
exposure = getattr(self, "exposure", None)
model = GEE(self.endog, self.exog, self.groups,
time=self.time, family=families.Binomial(),
offset=self.offset, exposure=exposure)
result = model.fit()
return result.params
@Appender(_gee_fit_doc)
def fit(self, maxiter=60, ctol=1e-6, start_params=None,
params_niter=1, first_dep_update=0,
cov_type='robust'):
rslt = super(OrdinalGEE, self).fit(maxiter, ctol, start_params,
params_niter, first_dep_update,
cov_type=cov_type)
rslt = rslt._results # use unwrapped instance
res_kwds = dict(((k, getattr(rslt, k)) for k in rslt._props))
# Convert the GEEResults to an OrdinalGEEResults
ord_rslt = OrdinalGEEResults(self, rslt.params,
rslt.cov_params() / rslt.scale,
rslt.scale,
cov_type=cov_type,
attr_kwds=res_kwds)
# for k in rslt._props:
# setattr(ord_rslt, k, getattr(rslt, k))
# TODO: document or delete
return OrdinalGEEResultsWrapper(ord_rslt)
class OrdinalGEEResults(GEEResults):
__doc__ = (
"This class summarizes the fit of a marginal regression model"
"for an ordinal response using GEE.\n"
+ _gee_results_doc)
def plot_distribution(self, ax=None, exog_values=None):
"""
Plot the fitted probabilities of endog in an ordinal model,
for specified values of the predictors.
Parameters
----------
ax : AxesSubplot
An axes on which to draw the graph. If None, new
figure and axes objects are created
exog_values : array_like
A list of dictionaries, with each dictionary mapping
variable names to values at which the variable is held
fixed. The values P(endog=y | exog) are plotted for all
possible values of y, at the given exog value. Variables
not included in a dictionary are held fixed at the mean
value.
Examples
--------
We have a model with covariates 'age' and 'sex', and wish to
plot the probabilities P(endog=y | exog) for males (sex=0) and
for females (sex=1), as separate paths on the plot. Since
'age' is not included below in the map, it is held fixed at
its mean value.
>>> ev = [{"sex": 1}, {"sex": 0}]
>>> rslt.plot_distribution(exog_values=ev)
"""
from statsmodels.graphics import utils as gutils
if ax is None:
fig, ax = gutils.create_mpl_ax(ax)
else:
fig = ax.get_figure()
# If no covariate patterns are specified, create one with all
# variables set to their mean values.
if exog_values is None:
exog_values = [{}, ]
exog_means = self.model.exog.mean(0)
ix_icept = [i for i, x in enumerate(self.model.exog_names) if
x.startswith("I(")]
for ev in exog_values:
for k in ev.keys():
if k not in self.model.exog_names:
raise ValueError("%s is not a variable in the model"
% k)
# Get the fitted probability for each level, at the given
# covariate values.
pr = []
for j in ix_icept:
xp = np.zeros_like(self.params)
xp[j] = 1.
for i, vn in enumerate(self.model.exog_names):
if i in ix_icept:
continue
# User-specified value
if vn in ev:
xp[i] = ev[vn]
# Mean value
else:
xp[i] = exog_means[i]
p = 1 / (1 + np.exp(-np.dot(xp, self.params)))
pr.append(p)
pr.insert(0, 1)
pr.append(0)
pr = np.asarray(pr)
prd = -np.diff(pr)
ax.plot(self.model.endog_values, prd, 'o-')
ax.set_xlabel("Response value")
ax.set_ylabel("Probability")
ax.set_ylim(0, 1)
return fig
def _score_test_submodel(par, sub):
"""
Return transformation matrices for design matrices.
Parameters
----------
par : instance
The parent model
sub : instance
The sub-model
Returns
-------
qm : array_like
Matrix mapping the design matrix of the parent to the design matrix
for the sub-model.
qc : array_like
Matrix mapping the design matrix of the parent to the orthogonal
complement of the columnspace of the submodel in the columnspace
of the parent.
Notes
-----
Returns None, None if the provided submodel is not actually a submodel.
"""
x1 = par.exog
x2 = sub.exog
u, s, vt = np.linalg.svd(x1, 0)
v = vt.T
# Get the orthogonal complement of col(x2) in col(x1).
a, _ = np.linalg.qr(x2)
a = u - np.dot(a, np.dot(a.T, u))
x2c, sb, _ = np.linalg.svd(a, 0)
x2c = x2c[:, sb > 1e-12]
# x1 * qm = x2
ii = np.flatnonzero(np.abs(s) > 1e-12)
qm = np.dot(v[:, ii], np.dot(u[:, ii].T, x2) / s[ii, None])
e = np.max(np.abs(x2 - np.dot(x1, qm)))
if e > 1e-8:
return None, None
# x1 * qc = x2c
qc = np.dot(v[:, ii], np.dot(u[:, ii].T, x2c) / s[ii, None])
return qm, qc
class OrdinalGEEResultsWrapper(GEEResultsWrapper):
pass
wrap.populate_wrapper(OrdinalGEEResultsWrapper, OrdinalGEEResults) # noqa:E305
class NominalGEE(GEE):
__doc__ = (
" Nominal Response Marginal Regression Model using GEE.\n" +
_gee_init_doc % {'extra_params': base._missing_param_doc,
'family_doc': _gee_nominal_family_doc,
'example': _gee_nominal_example,
'notes': _gee_nointercept})
def __init__(self, endog, exog, groups, time=None, family=None,
cov_struct=None, missing='none', offset=None,
dep_data=None, constraint=None, **kwargs):
endog, exog, groups, time, offset = self.setup_nominal(
endog, exog, groups, time, offset)
if family is None:
family = _Multinomial(self.ncut + 1)
if cov_struct is None:
cov_struct = cov_structs.NominalIndependence()
super(NominalGEE, self).__init__(
endog, exog, groups, time, family, cov_struct, missing,
offset, dep_data, constraint)
def _starting_params(self):
exposure = getattr(self, "exposure", None)
model = GEE(self.endog, self.exog, self.groups,
time=self.time, family=families.Binomial(),
offset=self.offset, exposure=exposure)
result = model.fit()
return result.params
def setup_nominal(self, endog, exog, groups, time, offset):
"""
Restructure nominal data as binary indicators so that they can
be analyzed using Generalized Estimating Equations.
"""
self.endog_orig = endog.copy()
self.exog_orig = exog.copy()
self.groups_orig = groups.copy()
if offset is not None:
self.offset_orig = offset.copy()
else:
self.offset_orig = None
offset = np.zeros(len(endog))
if time is not None:
self.time_orig = time.copy()
else:
self.time_orig = None
time = np.zeros((len(endog), 1))
exog = np.asarray(exog)
endog = np.asarray(endog)
groups = np.asarray(groups)
time = np.asarray(time)
offset = np.asarray(offset)
# The unique outcomes, except the greatest one.
self.endog_values = np.unique(endog)
endog_cuts = self.endog_values[0:-1]
ncut = len(endog_cuts)
self.ncut = ncut
nrows = len(endog_cuts) * exog.shape[0]
ncols = len(endog_cuts) * exog.shape[1]
exog_out = np.zeros((nrows, ncols), dtype=np.float64)
endog_out = np.zeros(nrows, dtype=np.float64)
groups_out = np.zeros(nrows, dtype=np.float64)
time_out = np.zeros((nrows, time.shape[1]),
dtype=np.float64)
offset_out = np.zeros(nrows, dtype=np.float64)
jrow = 0
zipper = zip(exog, endog, groups, time, offset)
for (exog_row, endog_value, group_value, time_value,
offset_value) in zipper:
# Loop over thresholds for the indicators
for thresh_ix, thresh in enumerate(endog_cuts):
u = np.zeros(len(endog_cuts), dtype=np.float64)
u[thresh_ix] = 1
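                # np.kron(u, exog_row) places the covariate row in the block
                # of columns belonging to this threshold, so each level gets
                # its own set of regression coefficients.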
exog_out[jrow, :] = np.kron(u, exog_row)
endog_out[jrow] = (int(endog_value == thresh))
groups_out[jrow] = group_value
time_out[jrow] = time_value
offset_out[jrow] = offset_value
jrow += 1
# exog names
if isinstance(self.exog_orig, pd.DataFrame):
xnames_in = self.exog_orig.columns
else:
xnames_in = ["x%d" % k for k in range(1, exog.shape[1] + 1)]
xnames = []
for tr in endog_cuts:
xnames.extend(["%s[%.1f]" % (v, tr) for v in xnames_in])
        exog_out = pd.DataFrame(exog_out, columns=xnames)
# Preserve endog name if there is one
if isinstance(self.endog_orig, pd.Series):
endog_out = pd.Series(endog_out, name=self.endog_orig.name)
return endog_out, exog_out, groups_out, time_out, offset_out
def mean_deriv(self, exog, lin_pred):
"""
Derivative of the expected endog with respect to the parameters.
Parameters
----------
exog : array_like
The exogeneous data at which the derivative is computed,
number of rows must be a multiple of `ncut`.
lin_pred : array_like
The values of the linear predictor, length must be multiple
of `ncut`.
Returns
-------
The derivative of the expected endog with respect to the
parameters.
"""
expval = np.exp(lin_pred)
# Reshape so that each row contains all the indicators
# corresponding to one multinomial observation.
expval_m = np.reshape(expval, (len(expval) // self.ncut,
self.ncut))
# The normalizing constant for the multinomial probabilities.
denom = 1 + expval_m.sum(1)
denom = np.kron(denom, np.ones(self.ncut, dtype=np.float64))
# The multinomial probabilities
mprob = expval / denom
# First term of the derivative: denom * expval' / denom^2 =
# expval' / denom.
dmat = mprob[:, None] * exog
# Second term of the derivative: -expval * denom' / denom^2
ddenom = expval[:, None] * exog
dmat -= mprob[:, None] * ddenom / denom[:, None]
return dmat
def mean_deriv_exog(self, exog, params, offset_exposure=None):
"""
Derivative of the expected endog with respect to exog for the
multinomial model, used in analyzing marginal effects.
Parameters
----------
exog : array_like
The exogeneous data at which the derivative is computed,
number of rows must be a multiple of `ncut`.
lpr : array_like
The linear predictor values, length must be multiple of
`ncut`.
Returns
-------
The value of the derivative of the expected endog with respect
to exog.
Notes
-----
offset_exposure must be set at None for the multinomial family.
"""
if offset_exposure is not None:
warnings.warn("Offset/exposure ignored for the multinomial family",
ValueWarning)
lpr = np.dot(exog, params)
expval = np.exp(lpr)
expval_m = np.reshape(expval, (len(expval) // self.ncut,
self.ncut))
denom = 1 + expval_m.sum(1)
denom = np.kron(denom, np.ones(self.ncut, dtype=np.float64))
bmat0 = np.outer(np.ones(exog.shape[0]), params)
# Masking matrix
qmat = []
for j in range(self.ncut):
ee = np.zeros(self.ncut, dtype=np.float64)
ee[j] = 1
qmat.append(np.kron(ee, np.ones(len(params) // self.ncut)))
qmat = np.array(qmat)
qmat = np.kron(np.ones((exog.shape[0] // self.ncut, 1)), qmat)
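        # Each row of qmat selects the block of parameters belonging to the
        # category of the corresponding indicator row, so bmat0 * qmat zeros
        # out the coefficients of the other categories.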
bmat = bmat0 * qmat
dmat = expval[:, None] * bmat / denom[:, None]
expval_mb = np.kron(expval_m, np.ones((self.ncut, 1)))
expval_mb = np.kron(expval_mb, np.ones((1, self.ncut)))
dmat -= expval[:, None] * (bmat * expval_mb) / denom[:, None] ** 2
return dmat
@Appender(_gee_fit_doc)
def fit(self, maxiter=60, ctol=1e-6, start_params=None,
params_niter=1, first_dep_update=0,
cov_type='robust'):
rslt = super(NominalGEE, self).fit(maxiter, ctol, start_params,
params_niter, first_dep_update,
cov_type=cov_type)
if rslt is None:
warnings.warn("GEE updates did not converge",
ConvergenceWarning)
return None
rslt = rslt._results # use unwrapped instance
res_kwds = dict(((k, getattr(rslt, k)) for k in rslt._props))
# Convert the GEEResults to a NominalGEEResults
nom_rslt = NominalGEEResults(self, rslt.params,
rslt.cov_params() / rslt.scale,
rslt.scale,
cov_type=cov_type,
attr_kwds=res_kwds)
# TODO: document or delete
# for k in rslt._props:
# setattr(nom_rslt, k, getattr(rslt, k))
return NominalGEEResultsWrapper(nom_rslt)
class NominalGEEResults(GEEResults):
__doc__ = (
"This class summarizes the fit of a marginal regression model"
"for a nominal response using GEE.\n"
+ _gee_results_doc)
def plot_distribution(self, ax=None, exog_values=None):
"""
Plot the fitted probabilities of endog in an nominal model,
for specified values of the predictors.
Parameters
----------
ax : AxesSubplot
An axes on which to draw the graph. If None, new
figure and axes objects are created
exog_values : array_like
A list of dictionaries, with each dictionary mapping
variable names to values at which the variable is held
fixed. The values P(endog=y | exog) are plotted for all
possible values of y, at the given exog value. Variables
not included in a dictionary are held fixed at the mean
value.
        Examples
        --------
We have a model with covariates 'age' and 'sex', and wish to
plot the probabilities P(endog=y | exog) for males (sex=0) and
for females (sex=1), as separate paths on the plot. Since
'age' is not included below in the map, it is held fixed at
its mean value.
>>> ex = [{"sex": 1}, {"sex": 0}]
        >>> rslt.plot_distribution(exog_values=ex)
"""
from statsmodels.graphics import utils as gutils
if ax is None:
fig, ax = gutils.create_mpl_ax(ax)
else:
fig = ax.get_figure()
# If no covariate patterns are specified, create one with all
# variables set to their mean values.
if exog_values is None:
exog_values = [{}, ]
link = self.model.family.link.inverse
ncut = self.model.family.ncut
k = int(self.model.exog.shape[1] / ncut)
exog_means = self.model.exog.mean(0)[0:k]
exog_names = self.model.exog_names[0:k]
exog_names = [x.split("[")[0] for x in exog_names]
params = np.reshape(self.params,
(ncut, len(self.params) // ncut))
for ev in exog_values:
exog = exog_means.copy()
for k in ev.keys():
if k not in exog_names:
raise ValueError("%s is not a variable in the model"
% k)
ii = exog_names.index(k)
exog[ii] = ev[k]
lpr = np.dot(params, exog)
pr = link(lpr)
pr = np.r_[pr, 1 - pr.sum()]
ax.plot(self.model.endog_values, pr, 'o-')
ax.set_xlabel("Response value")
ax.set_ylabel("Probability")
ax.set_xticks(self.model.endog_values)
ax.set_xticklabels(self.model.endog_values)
ax.set_ylim(0, 1)
return fig
class NominalGEEResultsWrapper(GEEResultsWrapper):
pass
wrap.populate_wrapper(NominalGEEResultsWrapper, NominalGEEResults) # noqa:E305
class _MultinomialLogit(Link):
"""
The multinomial logit transform, only for use with GEE.
Notes
-----
The data are assumed coded as binary indicators, where each
observed multinomial value y is coded as I(y == S[0]), ..., I(y ==
S[-1]), where S is the set of possible response labels, excluding
    the largest one. Therefore functions in this class should only
    be called using a vector argument whose length is a multiple of |S|
= ncut, which is an argument to be provided when initializing the
class.
call and derivative use a private method _clean to trim p by 1e-10
so that p is in (0, 1)
"""
def __init__(self, ncut):
self.ncut = ncut
def inverse(self, lpr):
"""
Inverse of the multinomial logit transform, which gives the
expected values of the data as a function of the linear
predictors.
Parameters
----------
lpr : array_like (length must be divisible by `ncut`)
The linear predictors
Returns
-------
prob : ndarray
Probabilities, or expected values
"""
expval = np.exp(lpr)
denom = 1 + np.reshape(expval, (len(expval) // self.ncut,
self.ncut)).sum(1)
denom = np.kron(denom, np.ones(self.ncut, dtype=np.float64))
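        # denom is repeated ncut times so that every indicator is divided by
        # the normalizing constant of its own multinomial observation.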
prob = expval / denom
return prob
class _Multinomial(families.Family):
"""
Pseudo-link function for fitting nominal multinomial models with
GEE. Not for use outside the GEE class.
"""
links = [_MultinomialLogit, ]
variance = varfuncs.binary
safe_links = [_MultinomialLogit, ]
def __init__(self, nlevels):
"""
Parameters
----------
nlevels : int
The number of distinct categories for the multinomial
distribution.
"""
self.initialize(nlevels)
def initialize(self, nlevels):
self.ncut = nlevels - 1
self.link = _MultinomialLogit(self.ncut)
class GEEMargins(object):
"""
Estimated marginal effects for a regression model fit with GEE.
Parameters
----------
results : GEEResults instance
The results instance of a fitted discrete choice model
args : tuple
Args are passed to `get_margeff`. This is the same as
results.get_margeff. See there for more information.
kwargs : dict
Keyword args are passed to `get_margeff`. This is the same as
results.get_margeff. See there for more information.
"""
def __init__(self, results, args, kwargs={}):
self._cache = {}
self.results = results
self.get_margeff(*args, **kwargs)
def _reset(self):
self._cache = {}
@cache_readonly
def tvalues(self):
_check_at_is_all(self.margeff_options)
return self.margeff / self.margeff_se
def summary_frame(self, alpha=.05):
"""
Returns a DataFrame summarizing the marginal effects.
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
frame : DataFrames
A DataFrame summarizing the marginal effects.
"""
_check_at_is_all(self.margeff_options)
from pandas import DataFrame
names = [_transform_names[self.margeff_options['method']],
'Std. Err.', 'z', 'Pr(>|z|)',
                 'Conf. Int. Low', 'Conf. Int. Hi.']
ind = self.results.model.exog.var(0) != 0 # True if not a constant
exog_names = self.results.model.exog_names
var_names = [name for i, name in enumerate(exog_names) if ind[i]]
table = np.column_stack((self.margeff, self.margeff_se, self.tvalues,
self.pvalues, self.conf_int(alpha)))
return DataFrame(table, columns=names, index=var_names)
@cache_readonly
def pvalues(self):
_check_at_is_all(self.margeff_options)
return stats.norm.sf(np.abs(self.tvalues)) * 2
def conf_int(self, alpha=.05):
"""
Returns the confidence intervals of the marginal effects
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
conf_int : ndarray
An array with lower, upper confidence intervals for the marginal
effects.
"""
_check_at_is_all(self.margeff_options)
me_se = self.margeff_se
q = stats.norm.ppf(1 - alpha / 2)
lower = self.margeff - q * me_se
upper = self.margeff + q * me_se
return np.asarray(lzip(lower, upper))
def summary(self, alpha=.05):
"""
Returns a summary table for marginal effects
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
Summary : SummaryTable
A SummaryTable instance
"""
_check_at_is_all(self.margeff_options)
results = self.results
model = results.model
title = model.__class__.__name__ + " Marginal Effects"
method = self.margeff_options['method']
top_left = [('Dep. Variable:', [model.endog_names]),
('Method:', [method]),
('At:', [self.margeff_options['at']]), ]
from statsmodels.iolib.summary import (Summary, summary_params,
table_extend)
exog_names = model.exog_names[:] # copy
smry = Summary()
const_idx = model.data.const_idx
if const_idx is not None:
exog_names.pop(const_idx)
J = int(getattr(model, "J", 1))
if J > 1:
yname, yname_list = results._get_endog_name(model.endog_names,
None, all=True)
else:
yname = model.endog_names
yname_list = [yname]
smry.add_table_2cols(self, gleft=top_left, gright=[],
yname=yname, xname=exog_names, title=title)
# NOTE: add_table_params is not general enough yet for margeff
# could use a refactor with getattr instead of hard-coded params
# tvalues etc.
table = []
conf_int = self.conf_int(alpha)
margeff = self.margeff
margeff_se = self.margeff_se
tvalues = self.tvalues
pvalues = self.pvalues
if J > 1:
for eq in range(J):
restup = (results, margeff[:, eq], margeff_se[:, eq],
tvalues[:, eq], pvalues[:, eq], conf_int[:, :, eq])
tble = summary_params(restup, yname=yname_list[eq],
xname=exog_names, alpha=alpha,
use_t=False,
skip_header=True)
tble.title = yname_list[eq]
# overwrite coef with method name
header = ['', _transform_names[method], 'std err', 'z',
'P>|z|',
'[%3.1f%% Conf. Int.]' % (100 - alpha * 100)]
tble.insert_header_row(0, header)
# from IPython.core.debugger import Pdb; Pdb().set_trace()
table.append(tble)
table = table_extend(table, keep_headers=True)
else:
restup = (results, margeff, margeff_se, tvalues, pvalues, conf_int)
table = summary_params(restup, yname=yname, xname=exog_names,
alpha=alpha, use_t=False, skip_header=True)
header = ['', _transform_names[method], 'std err', 'z',
'P>|z|', '[%3.1f%% Conf. Int.]' % (100 - alpha * 100)]
table.insert_header_row(0, header)
smry.tables.append(table)
return smry
def get_margeff(self, at='overall', method='dydx', atexog=None,
dummy=False, count=False):
self._reset() # always reset the cache when this is called
# TODO: if at is not all or overall, we can also put atexog values
# in summary table head
method = method.lower()
at = at.lower()
_check_margeff_args(at, method)
self.margeff_options = dict(method=method, at=at)
results = self.results
model = results.model
params = results.params
exog = model.exog.copy() # copy because values are changed
effects_idx = exog.var(0) != 0
const_idx = model.data.const_idx
if dummy:
_check_discrete_args(at, method)
dummy_idx, dummy = _get_dummy_index(exog, const_idx)
else:
dummy_idx = None
if count:
_check_discrete_args(at, method)
count_idx, count = _get_count_index(exog, const_idx)
else:
count_idx = None
# get the exogenous variables
exog = _get_margeff_exog(exog, at, atexog, effects_idx)
# get base marginal effects, handled by sub-classes
effects = model._derivative_exog(params, exog, method,
dummy_idx, count_idx)
effects = _effects_at(effects, at)
if at == 'all':
self.margeff = effects[:, effects_idx]
else:
# Set standard error of the marginal effects by Delta method.
margeff_cov, margeff_se = margeff_cov_with_se(
model, params, exog, results.cov_params(), at,
model._derivative_exog, dummy_idx, count_idx,
method, 1)
# do not care about at constant
self.margeff_cov = margeff_cov[effects_idx][:, effects_idx]
self.margeff_se = margeff_se[effects_idx]
self.margeff = effects[effects_idx]
| bsd-3-clause |
hanase/synthpop | synthpop/synthesizer.py | 1 | 5123 | import logging
import sys
from collections import namedtuple
import numpy as np
import pandas as pd
from scipy.stats import chisquare
from . import categorizer as cat
from . import draw
from .ipf.ipf import calculate_constraints
from .ipu.ipu import household_weights
logger = logging.getLogger("synthpop")
FitQuality = namedtuple(
'FitQuality',
('people_chisq', 'people_p'))
BlockGroupID = namedtuple(
'BlockGroupID', ('state', 'county', 'tract', 'block_group'))
def enable_logging():
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
def synthesize(h_marg, p_marg, h_jd, p_jd, h_pums, p_pums,
marginal_zero_sub=.01, jd_zero_sub=.001, hh_index_start=0):
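    # IPF/IPU cannot adjust cells whose marginal or joint frequency is exactly
    # zero, so small positive values are substituted below to keep the fit
    # numerically stable.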
# this is the zero marginal problem
h_marg = h_marg.replace(0, marginal_zero_sub)
p_marg = p_marg.replace(0, marginal_zero_sub)
# zero cell problem
h_jd.frequency = h_jd.frequency.replace(0, jd_zero_sub)
p_jd.frequency = p_jd.frequency.replace(0, jd_zero_sub)
# ipf for households
logger.info("Running ipf for households")
h_constraint, _ = calculate_constraints(h_marg, h_jd.frequency)
h_constraint.index = h_jd.cat_id
logger.debug("Household constraint")
logger.debug(h_constraint)
# ipf for persons
logger.info("Running ipf for persons")
p_constraint, _ = calculate_constraints(p_marg, p_jd.frequency)
p_constraint.index = p_jd.cat_id
logger.debug("Person constraint")
logger.debug(p_constraint)
# make frequency tables that the ipu expects
household_freq, person_freq = cat.frequency_tables(p_pums, h_pums,
p_jd.cat_id,
h_jd.cat_id)
# do the ipu to match person marginals
logger.info("Running ipu")
import time
t1 = time.time()
best_weights, fit_quality, iterations = household_weights(household_freq,
person_freq,
h_constraint,
p_constraint)
logger.info("Time to run ipu: %.3fs" % (time.time()-t1))
logger.debug("IPU weights:")
logger.debug(best_weights.describe())
logger.debug("Fit quality:")
logger.debug(fit_quality)
logger.debug("Number of iterations:")
logger.debug(iterations)
num_households = int(h_marg.groupby(level=0).sum().mean())
print "Drawing %d households" % num_households
best_chisq = np.inf
return draw.draw_households(
num_households, h_pums, p_pums, household_freq, h_constraint,
p_constraint, best_weights, hh_index_start=hh_index_start)
def synthesize_all(recipe, num_geogs=None, indexes=None,
marginal_zero_sub=.01, jd_zero_sub=.001):
"""
Returns
-------
households, people : pandas.DataFrame
fit_quality : dict of FitQuality
        Keys are geographic IDs, values are namedtuples with attributes
        ``people_chisq`` and ``people_p``.
"""
print "Synthesizing at geog level: '{}' (number of geographies is {})".\
format(recipe.get_geography_name(), recipe.get_num_geographies())
if indexes is None:
indexes = recipe.get_available_geography_ids()
hh_list = []
people_list = []
cnt = 0
fit_quality = {}
hh_index_start = 0
# TODO will parallelization work here?
for geog_id in indexes:
print "Synthesizing geog id:\n", geog_id
h_marg = recipe.get_household_marginal_for_geography(geog_id)
logger.debug("Household marginal")
logger.debug(h_marg)
p_marg = recipe.get_person_marginal_for_geography(geog_id)
logger.debug("Person marginal")
logger.debug(p_marg)
h_pums, h_jd = recipe.\
get_household_joint_dist_for_geography(geog_id)
logger.debug("Household joint distribution")
logger.debug(h_jd)
p_pums, p_jd = recipe.get_person_joint_dist_for_geography(geog_id)
logger.debug("Person joint distribution")
logger.debug(p_jd)
households, people, people_chisq, people_p = \
synthesize(
h_marg, p_marg, h_jd, p_jd, h_pums, p_pums,
marginal_zero_sub=marginal_zero_sub, jd_zero_sub=jd_zero_sub,
hh_index_start=hh_index_start)
hh_list.append(households)
people_list.append(people)
key = BlockGroupID(
geog_id['state'], geog_id['county'], geog_id['tract'],
geog_id['block group'])
fit_quality[key] = FitQuality(people_chisq, people_p)
cnt += 1
if len(households) > 0:
hh_index_start = households.index.values[-1] + 1
if num_geogs is not None and cnt >= num_geogs:
break
# TODO might want to write this to disk as we go?
return (
pd.concat(hh_list), pd.concat(people_list, ignore_index=True),
fit_quality)
| bsd-3-clause |
0x0all/scikit-learn | sklearn/decomposition/base.py | 12 | 5524 | """Principal Component Analysis Base Classes"""
# Author: Alexandre Gramfort <[email protected]>
# Olivier Grisel <[email protected]>
# Mathieu Blondel <[email protected]>
# Denis A. Engemann <[email protected]>
# Kyle Kastner <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from scipy import linalg
from ..base import BaseEstimator, TransformerMixin
from ..utils import check_array
from ..utils.extmath import fast_dot
from ..externals import six
from abc import ABCMeta, abstractmethod
class _BasePCA(six.with_metaclass(ABCMeta, BaseEstimator, TransformerMixin)):
"""Base class for PCA methods.
Warning: This class should not be used directly.
Use derived classes instead.
"""
def get_covariance(self):
"""Compute data covariance with the generative model.
``cov = components_.T * S**2 * components_ + sigma2 * eye(n_features)``
where S**2 contains the explained variances, and sigma2 contains the
noise variances.
Returns
-------
cov : array, shape=(n_features, n_features)
Estimated covariance of data.
"""
components_ = self.components_
exp_var = self.explained_variance_
if self.whiten:
components_ = components_ * np.sqrt(exp_var[:, np.newaxis])
exp_var_diff = np.maximum(exp_var - self.noise_variance_, 0.)
cov = np.dot(components_.T * exp_var_diff, components_)
cov.flat[::len(cov) + 1] += self.noise_variance_ # modify diag inplace
return cov
def get_precision(self):
"""Compute data precision matrix with the generative model.
Equals the inverse of the covariance but computed with
the matrix inversion lemma for efficiency.
Returns
-------
precision : array, shape=(n_features, n_features)
Estimated precision of data.
"""
n_features = self.components_.shape[1]
# handle corner cases first
if self.n_components_ == 0:
return np.eye(n_features) / self.noise_variance_
if self.n_components_ == n_features:
return linalg.inv(self.get_covariance())
# Get precision using matrix inversion lemma
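        # The covariance is low rank plus a scaled identity, so the Woodbury
        # identity lets its inverse be computed from an n_components x
        # n_components solve instead of inverting an n_features square matrix.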
components_ = self.components_
exp_var = self.explained_variance_
if self.whiten:
components_ = components_ * np.sqrt(exp_var[:, np.newaxis])
exp_var_diff = np.maximum(exp_var - self.noise_variance_, 0.)
precision = np.dot(components_, components_.T) / self.noise_variance_
precision.flat[::len(precision) + 1] += 1. / exp_var_diff
precision = np.dot(components_.T,
np.dot(linalg.inv(precision), components_))
precision /= -(self.noise_variance_ ** 2)
precision.flat[::len(precision) + 1] += 1. / self.noise_variance_
return precision
@abstractmethod
    def fit(self, X, y=None):
"""Placeholder for fit. Subclasses should implement this method!
Fit the model with X.
Parameters
----------
        X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples and
n_features is the number of features.
Returns
-------
        self : object
Returns the instance itself.
"""
def transform(self, X, y=None):
"""Apply dimensionality reduction to X.
X is projected on the first principal components previously extracted
from a training set.
Parameters
----------
X : array-like, shape (n_samples, n_features)
New data, where n_samples is the number of samples
and n_features is the number of features.
Returns
-------
X_new : array-like, shape (n_samples, n_components)
Examples
--------
>>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, copy=True, n_components=2, whiten=False)
>>> ipca.transform(X) # doctest: +SKIP
"""
X = check_array(X)
if self.mean_ is not None:
X = X - self.mean_
X_transformed = fast_dot(X, self.components_.T)
if self.whiten:
X_transformed /= np.sqrt(self.explained_variance_)
return X_transformed
def inverse_transform(self, X, y=None):
"""Transform data back to its original space.
In other words, return an input X_original whose transform would be X.
Parameters
----------
X : array-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples
and n_components is the number of components.
Returns
-------
        X_original : array-like, shape (n_samples, n_features)
Notes
-----
If whitening is enabled, inverse_transform will compute the
exact inverse operation, which includes reversing whitening.
"""
if self.whiten:
return fast_dot(X, np.sqrt(self.explained_variance_[:, np.newaxis]) *
self.components_) + self.mean_
else:
return fast_dot(X, self.components_) + self.mean_
| bsd-3-clause |
JosmanPS/scikit-learn | sklearn/decomposition/tests/test_truncated_svd.py | 240 | 6055 | """Test truncated SVD transformer."""
import numpy as np
import scipy.sparse as sp
from sklearn.decomposition import TruncatedSVD
from sklearn.utils import check_random_state
from sklearn.utils.testing import (assert_array_almost_equal, assert_equal,
assert_raises, assert_greater,
assert_array_less)
# Make an X that looks somewhat like a small tf-idf matrix.
# XXX newer versions of SciPy have scipy.sparse.rand for this.
shape = 60, 55
n_samples, n_features = shape
rng = check_random_state(42)
X = rng.randint(-100, 20, np.product(shape)).reshape(shape)
X = sp.csr_matrix(np.maximum(X, 0), dtype=np.float64)
X.data[:] = 1 + np.log(X.data)
Xdense = X.A
def test_algorithms():
svd_a = TruncatedSVD(30, algorithm="arpack")
svd_r = TruncatedSVD(30, algorithm="randomized", random_state=42)
Xa = svd_a.fit_transform(X)[:, :6]
Xr = svd_r.fit_transform(X)[:, :6]
assert_array_almost_equal(Xa, Xr)
comp_a = np.abs(svd_a.components_)
comp_r = np.abs(svd_r.components_)
# All elements are equal, but some elements are more equal than others.
assert_array_almost_equal(comp_a[:9], comp_r[:9])
assert_array_almost_equal(comp_a[9:], comp_r[9:], decimal=3)
def test_attributes():
for n_components in (10, 25, 41):
tsvd = TruncatedSVD(n_components).fit(X)
assert_equal(tsvd.n_components, n_components)
assert_equal(tsvd.components_.shape, (n_components, n_features))
def test_too_many_components():
for algorithm in ["arpack", "randomized"]:
for n_components in (n_features, n_features+1):
tsvd = TruncatedSVD(n_components=n_components, algorithm=algorithm)
assert_raises(ValueError, tsvd.fit, X)
def test_sparse_formats():
for fmt in ("array", "csr", "csc", "coo", "lil"):
Xfmt = Xdense if fmt == "dense" else getattr(X, "to" + fmt)()
tsvd = TruncatedSVD(n_components=11)
Xtrans = tsvd.fit_transform(Xfmt)
assert_equal(Xtrans.shape, (n_samples, 11))
Xtrans = tsvd.transform(Xfmt)
assert_equal(Xtrans.shape, (n_samples, 11))
def test_inverse_transform():
for algo in ("arpack", "randomized"):
# We need a lot of components for the reconstruction to be "almost
# equal" in all positions. XXX Test means or sums instead?
tsvd = TruncatedSVD(n_components=52, random_state=42)
Xt = tsvd.fit_transform(X)
Xinv = tsvd.inverse_transform(Xt)
assert_array_almost_equal(Xinv, Xdense, decimal=1)
def test_integers():
Xint = X.astype(np.int64)
tsvd = TruncatedSVD(n_components=6)
Xtrans = tsvd.fit_transform(Xint)
assert_equal(Xtrans.shape, (n_samples, tsvd.n_components))
def test_explained_variance():
# Test sparse data
svd_a_10_sp = TruncatedSVD(10, algorithm="arpack")
svd_r_10_sp = TruncatedSVD(10, algorithm="randomized", random_state=42)
svd_a_20_sp = TruncatedSVD(20, algorithm="arpack")
svd_r_20_sp = TruncatedSVD(20, algorithm="randomized", random_state=42)
X_trans_a_10_sp = svd_a_10_sp.fit_transform(X)
X_trans_r_10_sp = svd_r_10_sp.fit_transform(X)
X_trans_a_20_sp = svd_a_20_sp.fit_transform(X)
X_trans_r_20_sp = svd_r_20_sp.fit_transform(X)
# Test dense data
svd_a_10_de = TruncatedSVD(10, algorithm="arpack")
svd_r_10_de = TruncatedSVD(10, algorithm="randomized", random_state=42)
svd_a_20_de = TruncatedSVD(20, algorithm="arpack")
svd_r_20_de = TruncatedSVD(20, algorithm="randomized", random_state=42)
X_trans_a_10_de = svd_a_10_de.fit_transform(X.toarray())
X_trans_r_10_de = svd_r_10_de.fit_transform(X.toarray())
X_trans_a_20_de = svd_a_20_de.fit_transform(X.toarray())
X_trans_r_20_de = svd_r_20_de.fit_transform(X.toarray())
# helper arrays for tests below
svds = (svd_a_10_sp, svd_r_10_sp, svd_a_20_sp, svd_r_20_sp, svd_a_10_de,
svd_r_10_de, svd_a_20_de, svd_r_20_de)
svds_trans = (
(svd_a_10_sp, X_trans_a_10_sp),
(svd_r_10_sp, X_trans_r_10_sp),
(svd_a_20_sp, X_trans_a_20_sp),
(svd_r_20_sp, X_trans_r_20_sp),
(svd_a_10_de, X_trans_a_10_de),
(svd_r_10_de, X_trans_r_10_de),
(svd_a_20_de, X_trans_a_20_de),
(svd_r_20_de, X_trans_r_20_de),
)
svds_10_v_20 = (
(svd_a_10_sp, svd_a_20_sp),
(svd_r_10_sp, svd_r_20_sp),
(svd_a_10_de, svd_a_20_de),
(svd_r_10_de, svd_r_20_de),
)
svds_sparse_v_dense = (
(svd_a_10_sp, svd_a_10_de),
(svd_a_20_sp, svd_a_20_de),
(svd_r_10_sp, svd_r_10_de),
(svd_r_20_sp, svd_r_20_de),
)
# Assert the 1st component is equal
for svd_10, svd_20 in svds_10_v_20:
assert_array_almost_equal(
svd_10.explained_variance_ratio_,
svd_20.explained_variance_ratio_[:10],
decimal=5,
)
# Assert that 20 components has higher explained variance than 10
for svd_10, svd_20 in svds_10_v_20:
assert_greater(
svd_20.explained_variance_ratio_.sum(),
svd_10.explained_variance_ratio_.sum(),
)
# Assert that all the values are greater than 0
for svd in svds:
assert_array_less(0.0, svd.explained_variance_ratio_)
# Assert that total explained variance is less than 1
for svd in svds:
assert_array_less(svd.explained_variance_ratio_.sum(), 1.0)
# Compare sparse vs. dense
for svd_sparse, svd_dense in svds_sparse_v_dense:
assert_array_almost_equal(svd_sparse.explained_variance_ratio_,
svd_dense.explained_variance_ratio_)
# Test that explained_variance is correct
for svd, transformed in svds_trans:
total_variance = np.var(X.toarray(), axis=0).sum()
variances = np.var(transformed, axis=0)
true_explained_variance_ratio = variances / total_variance
assert_array_almost_equal(
svd.explained_variance_ratio_,
true_explained_variance_ratio,
)
| bsd-3-clause |
altermarkive/Resurrecting-JimFleming-Numerai | src/ml-jimfleming--numerai/notebooks/visualization.py | 1 | 3551 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import print_function
from __future__ import division
import random
random.seed(67)
import numpy as np
np.random.seed(67)
import matplotlib
matplotlib.use('Agg')
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sompy import SOMFactory
from sklearn.cluster import DBSCAN
from sompy.visualization.mapview import View2D
from sompy.visualization.umatrix import UMatrixView
from sompy.visualization.histogram import Hist2d
import os
import sys
sns.set_style('white')
sns.set_context('notebook', font_scale=2)
df_train = pd.read_csv(os.getenv('PREPARED_TRAINING'))
df_valid = pd.read_csv(os.getenv('PREPARED_VALIDATING'))
df_test = pd.read_csv(os.getenv('PREPARED_TESTING'))
feature_cols = list(df_train.columns[:-1])
target_col = df_train.columns[-1]
X_train = df_train[feature_cols].values
y_train = df_train[target_col].values
X_valid = df_valid[feature_cols].values
y_valid = df_valid[target_col].values
# X_test = df_test[feature_cols].values
sm = SOMFactory.build(X_train, mapsize=[30, 30])
sm.train()
bmu = sm.find_bmu(X_valid)
print(bmu[1].shape)
print(sm.component_names)
xy = sm.bmu_ind_to_xy(bmu[0])
print(xy)
projection = sm.project_data(X_valid)
print(projection)
#sm.predict_by(X_train[:,:-1], y_train[:,-1:])
v = UMatrixView(8, 8, 'SOM', cmap='viridis')
v.show(sm)
plt.savefig(os.path.join(os.getenv('STORING'), 'figure8.png'))
v = View2D(8, 8, 'SOM', cmap='viridis')
v.prepare()
v.show(sm, col_sz=5, cmap='viridis')
plt.rcParams['axes.labelsize'] = 10
plt.savefig(os.path.join(os.getenv('STORING'), 'figure9.png'))
tsne_data = np.load(os.path.join(os.getenv('STORING'), 'tsne_2d_30p.npz'))
tsne_train = tsne_data['train']
tsne_valid = tsne_data['valid']
tsne_test = tsne_data['test']
tsne_all = np.concatenate([tsne_train, tsne_valid, tsne_test], axis=0)
dbscan = DBSCAN(eps=0.1, min_samples=5, metric='euclidean', algorithm='auto', leaf_size=30, p=None)
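# DBSCAN assigns a cluster label to every point of the combined t-SNE
# embedding; points in low-density regions receive the noise label -1, which
# shows up as its own color in the scatter plots below.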
dbscan_all = dbscan.fit_predict(tsne_all)
fig = plt.figure(figsize=(24, 24))
ax = fig.add_subplot(111)
ax.scatter(tsne_all[:,0], tsne_all[:,1], c=dbscan_all, cmap='Set3', s=8, alpha=0.8, marker='.', lw=0)
fig.savefig(os.path.join(os.getenv('STORING'), 'figure10.png'))
if bool(int(os.getenv('TSNE_2D_ONLY', '0'))):
sys.exit()
tsne_data = np.load(os.path.join(os.getenv('STORING'), 'tsne_3d_30p.npz'))
tsne_train = tsne_data['train']
tsne_valid = tsne_data['valid']
tsne_test = tsne_data['test']
tsne_all = np.concatenate([tsne_train, tsne_valid, tsne_test], axis=0)
dbscan = DBSCAN(eps=0.1, min_samples=5, metric='euclidean', algorithm='auto', leaf_size=30, p=None)
dbscan_all = dbscan.fit_predict(tsne_all)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(16, 16))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(tsne_all[:,0], tsne_all[:,1], tsne_all[:,2], c=dbscan_all, cmap='Set1', s=10, alpha=1.0, marker='.', lw=0, depthshade=True)
fig.savefig(os.path.join(os.getenv('STORING'), 'figure11.png'))
import plotly
import plotly.graph_objs as go
plotly.offline.init_notebook_mode()
idx = np.random.choice(len(tsne_all), 10000)
trace = go.Scatter3d(
x=tsne_all[idx,0],
y=tsne_all[idx,1],
z=tsne_all[idx,2],
mode='markers',
marker=dict(
size=4,
color=dbscan_all[idx],
colorscale='viridis',
opacity=0.8
)
)
plotly.offline.plot(go.Figure(data=[trace], layout=go.Layout(title='tsne-3d-scatter')), filename=os.path.join(os.getenv('STORING'), 'tsne-3d-scatter.html'))
| mit |
iancze/PSOAP | scripts/psoap_retrieve_ST3_draw.py | 1 | 2832 | #!/usr/bin/env python
import argparse
parser = argparse.ArgumentParser(description="Reconstruct the composite spectra for A and B component.")
parser.add_argument("--draws", type=int, default=0, help="In addition to plotting the mean GP, plot several draws of the GP to show the scatter in predicitions.")
args = parser.parse_args()
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import ascii
from psoap import constants as C
from psoap import utils
import yaml
try:
f = open("config.yaml")
config = yaml.load(f)
f.close()
except FileNotFoundError as e:
print("You need to copy a config.yaml file to this directory, and then edit the values to your particular case.")
raise
# Load the list of chunks
chunks = ascii.read(config["chunk_file"])
def process_chunk(row):
print("processing", row)
order, wl0, wl1 = row
plots_dir = "plots_" + C.chunk_fmt.format(order, wl0, wl1)
mu = np.load(plots_dir + "/mu.npy")
Sigma = np.load(plots_dir + "/Sigma.npy")
n_pix_predict = int(len(mu)/3)
wls_A_predict, f = np.load(plots_dir + "/f.npy")
wls_B_predict, g = np.load(plots_dir + "/g.npy")
wls_C_predict, h = np.load(plots_dir + "/h.npy")
mu_f = mu[0:n_pix_predict]
mu_g = mu[n_pix_predict: 2 * n_pix_predict]
mu_h = mu[2 * n_pix_predict:]
# Make some multivariate draws
n_draws = args.draws
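    # mu and Sigma are the joint GP posterior mean and covariance of the three
    # component spectra stacked end to end, so every draw is split back into
    # f, g and h segments of length n_pix_predict below.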
mu_draw = np.random.multivariate_normal(mu, Sigma, size=n_draws)
# Reshape the spectra right here
mu_draw_f = np.empty((n_draws, n_pix_predict))
mu_draw_g = np.empty((n_draws, n_pix_predict))
mu_draw_h = np.empty((n_draws, n_pix_predict))
for j in range(n_draws):
mu_draw_j = mu_draw[j]
mu_draw_f[j] = mu_draw_j[0:n_pix_predict]
mu_draw_g[j] = mu_draw_j[n_pix_predict: 2 * n_pix_predict]
mu_draw_h[j] = mu_draw_j[2 * n_pix_predict:]
np.save(plots_dir + "/f_draws.npy", mu_draw_f)
np.save(plots_dir + "/g_draws.npy", mu_draw_g)
np.save(plots_dir + "/h_draws.npy", mu_draw_h)
# Plot the draws
fig, ax = plt.subplots(nrows=3, sharex=True)
for j in range(n_draws):
ax[0].plot(wls_A_predict, mu_draw_f[j], color="0.2", lw=0.5)
ax[1].plot(wls_B_predict, mu_draw_g[j], color="0.2", lw=0.5)
ax[2].plot(wls_C_predict, mu_draw_h[j], color="0.2", lw=0.5)
ax[0].plot(wls_A_predict, mu_f, "b")
ax[0].set_ylabel(r"$f$")
ax[1].plot(wls_B_predict, mu_g, "g")
ax[1].set_ylabel(r"$g$")
ax[2].plot(wls_C_predict, mu_h, "r")
ax[2].set_ylabel(r"$h$")
ax[-1].set_xlabel(r"$\lambda\,[\AA]$")
fig.savefig(plots_dir + "/reconstructed.png", dpi=300)
plt.close("all")
# A laptop (e.g., mine) doesn't have enough memory to do this in parallel, so only serial for now
for chunk in chunks:
process_chunk(chunk)
| mit |
xguse/ggplot | ggplot/tests/test_geom_rect.py | 12 | 1316 | from __future__ import (absolute_import, division, print_function,
unicode_literals)
from nose.tools import assert_raises
from . import get_assert_same_ggplot, cleanup
assert_same_ggplot = get_assert_same_ggplot(__file__)
from ggplot import *
from ggplot.exampledata import diamonds
import pandas as pd
@cleanup
def test_geom_rect():
df = pd.DataFrame({
'xmin': [1,3,5],
'xmax': [2, 3.5, 7],
'ymin': [1, 4, 6],
'ymax': [5, 5, 9],
'fill': ['blue', 'red', 'green'],
'quality': ['good', 'bad', 'ugly'],
'alpha': [0.1, 0.5, 0.9],
'texture': ['hard', 'soft', 'medium']})
p = ggplot(df, aes(xmin='xmin', xmax='xmax', ymin='ymin', ymax='ymax',
colour='quality', fill='fill', alpha='alpha',
linetype='texture'))
p += geom_rect(size=5)
assert_same_ggplot(p, 'geom_rect')
p = ggplot(df, aes(x='xmin', y='ymin'))
p += geom_point(size=100, colour='red', alpha=0.5)
p += geom_rect(aes(fill='fill', xmin='xmin', xmax='xmax', ymin=0,
ymax='ymax'), alpha=0.1)
assert_same_ggplot(p, 'geom_rect_with_point')
def test_geom_rect_missing_req_aes():
with assert_raises(Exception):
        print(ggplot(diamonds, aes(x='x', y='y')) + geom_point() + geom_rect())
| bsd-2-clause |
theroncarmichael/GC-CaT-Metallicitiy | interp.py | 1 | 9342 | #! /usr/bin/env python
'''
Created on Mar 17, 2011
@author: Chris Usher
'''
import numpy as np
#import matplotlib.pyplot as plt
import scipy.interpolate as interpolate
def redisperse(inputwavelengths, inputfluxes, firstWavelength=None, lastWavelength=None, dispersion=None, nPixels=None, outside=None, function='spline'):
inputedges = np.empty(inputwavelengths.size + 1)
inputedges[1:-1] = (inputwavelengths[1:] + inputwavelengths[:-1]) / 2
inputedges[0] = 3 * inputwavelengths[0] / 2 - inputwavelengths[1] / 2
inputedges[-1] = 3 * inputwavelengths[-1] / 2 - inputwavelengths[-2] / 2
inputdispersions = inputedges[1:] - inputedges[:-1]
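    # inputedges holds the pixel boundaries (midpoints between wavelength
    # samples, extrapolated at the ends) and inputdispersions the width of
    # each input pixel.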
epsilon = 1e-10
if dispersion == None and nPixels != None:
if firstWavelength == None:
firstWavelength = inputwavelengths[0]
if lastWavelength == None:
lastWavelength = inputwavelengths[-1]
outputwavelengths = np.linspace(firstWavelength, lastWavelength, nPixels)
elif dispersion != None and nPixels == None:
if firstWavelength == None:
firstWavelength = inputwavelengths[0]
if lastWavelength == None:
lastWavelength = inputwavelengths[-1]
outputwavelengths = np.arange(firstWavelength, lastWavelength + epsilon, dispersion)
elif dispersion != None and nPixels != None:
if firstWavelength != None:
outputwavelengths = firstWavelength + dispersion * np.ones(nPixels)
elif lastWavelength != None:
outputwavelengths = lastWavelength - dispersion * np.ones(nPixels)
outputwavelengths = outputwavelengths[::-1]
else:
outputwavelengths = inputwavelengths[0] + dispersion * np.ones(nPixels)
else:
dispersion = (inputwavelengths[-1] - inputwavelengths[0]) / (inputwavelengths.size - 1)
if lastWavelength == None:
lastWavelength = inputwavelengths[-1]
if firstWavelength != None:
outputwavelengths = np.arange(firstWavelength, lastWavelength + epsilon, dispersion)
else:
outputwavelengths = np.arange(inputwavelengths[0], lastWavelength + epsilon, dispersion)
outputdispersion = outputwavelengths[1] - outputwavelengths[0]
outputedges = np.linspace(outputwavelengths[0] - outputdispersion / 2, outputwavelengths[-1] + outputdispersion / 2, outputwavelengths.size + 1)
outputfluxes = interp(inputwavelengths, inputfluxes, inputedges, inputdispersions, outputwavelengths, outputedges, outside, function)
return (outputwavelengths, outputfluxes)
def rebin(inputwavelengths, inputfluxes, outputwavelengths, outside=None, function='spline', ratio=False):
inputedges = np.empty(inputwavelengths.size + 1)
inputedges[1:-1] = (inputwavelengths[1:] + inputwavelengths[:-1]) / 2
inputedges[0] = 3 * inputwavelengths[0] / 2 - inputwavelengths[1] / 2
inputedges[-1] = 3 * inputwavelengths[-1] / 2 - inputwavelengths[-2] / 2
inputdispersions = inputedges[1:] - inputedges[:-1]
outputedges = np.empty(outputwavelengths.size + 1)
outputedges[1:-1] = (outputwavelengths[1:] + outputwavelengths[:-1]) / 2
outputedges[0] = 3 * outputwavelengths[0] / 2 - outputwavelengths[1] / 2
outputedges[-1] = 3 * outputwavelengths[-1] / 2 - outputwavelengths[-2] / 2
return interp(inputwavelengths, inputfluxes, inputedges, inputdispersions, outputwavelengths, outputedges, outside, function, ratio)
def interp(inputwavelengths, inputfluxes, inputedges, inputdispersions, outputwavelengths, outputedges, outside=None, function='spline', ratio=False):
if not ratio:
fluxdensities = inputfluxes / inputdispersions.mean()
else:
fluxdensities = inputfluxes
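    # When ratio is False the per-pixel fluxes are converted to approximate
    # flux densities using the mean input pixel width, so that integrating
    # over each output pixel below conserves the total flux.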
outputfluxes = np.ones(outputwavelengths.size)
if outside != None:
outputfluxes = outputfluxes * outside
else:
middle = (outputwavelengths[0] + outputwavelengths[-1]) / 2
firstnew = None
lastnew = None
if function == 'nearest':
pixels = np.arange(0, inputfluxes.size)
for newpixel in range(outputfluxes.size):
if inputedges[0] <= outputwavelengths[newpixel] <= inputedges[-1]:
outputlowerlimit = outputedges[newpixel]
outputupperlimit = outputedges[newpixel + 1]
outputfluxes[newpixel] = 0
below = inputedges[1:] < outputlowerlimit
above = inputedges[:-1] > outputupperlimit
ok = ~(below | above)
for oldpixel in pixels[ok]:
inputlowerlimit = inputedges[oldpixel]
inputupperlimit = inputedges[oldpixel + 1]
if inputlowerlimit >= outputlowerlimit and inputupperlimit <= outputupperlimit:
outputfluxes[newpixel] += fluxdensities[oldpixel] * inputdispersions[oldpixel]
elif inputlowerlimit < outputlowerlimit and inputupperlimit > outputupperlimit:
outputfluxes[newpixel] += fluxdensities[oldpixel] * (outputupperlimit - outputlowerlimit)
elif inputlowerlimit < outputlowerlimit and outputlowerlimit <= inputupperlimit <= outputupperlimit:
outputfluxes[newpixel] += fluxdensities[oldpixel] * (inputupperlimit - outputlowerlimit)
elif outputupperlimit >= inputlowerlimit >= outputlowerlimit and inputupperlimit > outputupperlimit:
outputfluxes[newpixel] += fluxdensities[oldpixel] * (outputupperlimit - inputlowerlimit)
if firstnew == None:
firstnew = outputfluxes[newpixel]
if ratio:
outputfluxes[newpixel] = outputfluxes[newpixel] / (outputupperlimit - outputlowerlimit)
elif outputwavelengths[newpixel] > inputwavelengths[-1] and lastnew == None:
lastnew = outputfluxes[newpixel - 1]
else:
fluxspline = interpolate.UnivariateSpline(inputwavelengths, fluxdensities, s=0, k=3)
for newpixel in range(outputfluxes.size):
if inputedges[0] <= outputwavelengths[newpixel] <= inputedges[-1]:
outputlowerlimit = outputedges[newpixel]
outputupperlimit = outputedges[newpixel + 1]
outputfluxes[newpixel] = fluxspline.integral(outputedges[newpixel], outputedges[newpixel + 1])
if firstnew == None:
firstnew = outputfluxes[newpixel]
if ratio:
outputfluxes[newpixel] = outputfluxes[newpixel] / (outputupperlimit - outputlowerlimit)
elif outputwavelengths[newpixel] > inputwavelengths[-1] and lastnew == None:
lastnew = outputfluxes[newpixel - 1]
if outside == None:
for newpixel in range(outputfluxes.size):
if outputwavelengths[newpixel] < inputwavelengths[0]:
outputfluxes[newpixel] = firstnew
elif outputwavelengths[newpixel] > inputwavelengths[-1]:
outputfluxes[newpixel] = lastnew
return outputfluxes
def lineartolog(inputwavelengths, inputfluxes, outside=0, function='spline', ratio=False, logDispersion=0):
inputedges = np.empty(inputwavelengths.size + 1)
inputedges[1:-1] = (inputwavelengths[1:] + inputwavelengths[:-1]) / 2
inputedges[0] = 3 * inputwavelengths[0] / 2 - inputwavelengths[1] / 2
inputedges[-1] = 3 * inputwavelengths[-1] / 2 - inputwavelengths[-2] / 2
inputdispersions = inputedges[1:] - inputedges[:-1]
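    # Resample onto a logarithmically spaced wavelength grid, i.e. constant
    # delta(log lambda) per pixel, which corresponds to a constant velocity
    # width per pixel.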
if logDispersion:
outputedges = np.arange(np.log10(inputedges[0]), np.log10(inputedges[-1]), logDispersion)
outputwavelengths = (outputedges[:-1] + outputedges[1:]) / 2
outputedges = 10**outputedges
outputwavelengths = 10**outputwavelengths
else:
outputedges = np.logspace(np.log10(inputedges[0]), np.log10(inputedges[-1]), inputedges.size)
outputwavelengths = (outputedges[:-1] * outputedges[1:])**.5
return outputwavelengths, interp(inputwavelengths, inputfluxes, inputedges, inputdispersions, outputwavelengths, outputedges, outside, function, ratio)
def logtolinear(inputwavelengths, inputfluxes, outside=0, function='spline', ratio=False):
logWavelengths = np.log10(inputwavelengths)
inputedges = np.empty(logWavelengths.size + 1)
inputedges[1:-1] = (logWavelengths[1:] + logWavelengths[:-1]) / 2
inputedges[0] = 3 * logWavelengths[0] / 2 - logWavelengths[1] / 2
inputedges[-1] = 3 * logWavelengths[-1] / 2 - logWavelengths[-2] / 2
inputedges = 10**inputedges
inputdispersions = inputedges[1:] - inputedges[:-1]
outputedges = np.linspace(inputedges[0], inputedges[-1], inputedges.size)
outputwavelengths = (outputedges[:-1] + outputedges[1:]) / 2
return outputwavelengths, interp(inputwavelengths, inputfluxes, inputedges, inputdispersions, outputwavelengths, outputedges, outside, function, ratio)
#plt.show()
| bsd-3-clause |
kmike/scikit-learn | examples/manifold/plot_swissroll.py | 4 | 1416 | """
===================================
Swiss Roll reduction with LLE
===================================
An illustration of Swiss Roll reduction
with locally linear embedding
"""
# Author: Fabian Pedregosa -- <[email protected]>
# License: BSD, (C) INRIA 2011
print(__doc__)
import pylab as pl
# This import is needed to modify the way figure behaves
from mpl_toolkits.mplot3d import Axes3D
Axes3D
#----------------------------------------------------------------------
# Locally linear embedding of the swiss roll
from sklearn import manifold, datasets
X, color = datasets.samples_generator.make_swiss_roll(n_samples=1500)
print("Computing LLE embedding")
X_r, err = manifold.locally_linear_embedding(X, n_neighbors=12,
n_components=2)
print("Done. Reconstruction error: %g" % err)
#----------------------------------------------------------------------
# Plot result
fig = pl.figure()
try:
# compatibility matplotlib < 1.0
ax = fig.add_subplot(211, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=pl.cm.Spectral)
except:
ax = fig.add_subplot(211)
ax.scatter(X[:, 0], X[:, 2], c=color, cmap=pl.cm.Spectral)
ax.set_title("Original data")
ax = fig.add_subplot(212)
ax.scatter(X_r[:, 0], X_r[:, 1], c=color, cmap=pl.cm.Spectral)
pl.axis('tight')
pl.xticks([]), pl.yticks([])
pl.title('Projected data')
pl.show()
| bsd-3-clause |
sumspr/scikit-learn | sklearn/feature_extraction/dict_vectorizer.py | 234 | 12267 | # Authors: Lars Buitinck
# Dan Blanchard <[email protected]>
# License: BSD 3 clause
from array import array
from collections import Mapping
from operator import itemgetter
import numpy as np
import scipy.sparse as sp
from ..base import BaseEstimator, TransformerMixin
from ..externals import six
from ..externals.six.moves import xrange
from ..utils import check_array, tosequence
from ..utils.fixes import frombuffer_empty
def _tosequence(X):
"""Turn X into a sequence or ndarray, avoiding a copy if possible."""
if isinstance(X, Mapping): # single sample
return [X]
else:
return tosequence(X)
class DictVectorizer(BaseEstimator, TransformerMixin):
"""Transforms lists of feature-value mappings to vectors.
This transformer turns lists of mappings (dict-like objects) of feature
names to feature values into Numpy arrays or scipy.sparse matrices for use
with scikit-learn estimators.
When feature values are strings, this transformer will do a binary one-hot
(aka one-of-K) coding: one boolean-valued feature is constructed for each
of the possible string values that the feature can take on. For instance,
a feature "f" that can take on the values "ham" and "spam" will become two
features in the output, one signifying "f=ham", the other "f=spam".
Features that do not occur in a sample (mapping) will have a zero value
in the resulting array/matrix.
Read more in the :ref:`User Guide <dict_feature_extraction>`.
Parameters
----------
dtype : callable, optional
The type of feature values. Passed to Numpy array/scipy.sparse matrix
constructors as the dtype argument.
separator: string, optional
Separator string used when constructing new features for one-hot
coding.
sparse: boolean, optional.
Whether transform should produce scipy.sparse matrices.
True by default.
sort: boolean, optional.
Whether ``feature_names_`` and ``vocabulary_`` should be sorted when fitting.
True by default.
Attributes
----------
vocabulary_ : dict
A dictionary mapping feature names to feature indices.
feature_names_ : list
A list of length n_features containing the feature names (e.g., "f=ham"
and "f=spam").
Examples
--------
>>> from sklearn.feature_extraction import DictVectorizer
>>> v = DictVectorizer(sparse=False)
>>> D = [{'foo': 1, 'bar': 2}, {'foo': 3, 'baz': 1}]
>>> X = v.fit_transform(D)
>>> X
array([[ 2., 0., 1.],
[ 0., 1., 3.]])
>>> v.inverse_transform(X) == \
[{'bar': 2.0, 'foo': 1.0}, {'baz': 1.0, 'foo': 3.0}]
True
>>> v.transform({'foo': 4, 'unseen_feature': 3})
array([[ 0., 0., 4.]])
See also
--------
FeatureHasher : performs vectorization using only a hash function.
sklearn.preprocessing.OneHotEncoder : handles nominal/categorical features
encoded as columns of integers.
"""
def __init__(self, dtype=np.float64, separator="=", sparse=True,
sort=True):
self.dtype = dtype
self.separator = separator
self.sparse = sparse
self.sort = sort
def fit(self, X, y=None):
"""Learn a list of feature name -> indices mappings.
Parameters
----------
X : Mapping or iterable over Mappings
Dict(s) or Mapping(s) from feature names (arbitrary Python
objects) to feature values (strings or convertible to dtype).
y : (ignored)
Returns
-------
self
"""
feature_names = []
vocab = {}
for x in X:
for f, v in six.iteritems(x):
if isinstance(v, six.string_types):
f = "%s%s%s" % (f, self.separator, v)
if f not in vocab:
feature_names.append(f)
vocab[f] = len(vocab)
if self.sort:
feature_names.sort()
vocab = dict((f, i) for i, f in enumerate(feature_names))
self.feature_names_ = feature_names
self.vocabulary_ = vocab
return self
def _transform(self, X, fitting):
# Sanity check: Python's array has no way of explicitly requesting the
# signed 32-bit integers that scipy.sparse needs, so we use the next
# best thing: typecode "i" (int). However, if that gives larger or
# smaller integers than 32-bit ones, np.frombuffer screws up.
assert array("i").itemsize == 4, (
"sizeof(int) != 4 on your platform; please report this at"
" https://github.com/scikit-learn/scikit-learn/issues and"
" include the output from platform.platform() in your bug report")
dtype = self.dtype
if fitting:
feature_names = []
vocab = {}
else:
feature_names = self.feature_names_
vocab = self.vocabulary_
# Process everything as sparse regardless of setting
X = [X] if isinstance(X, Mapping) else X
indices = array("i")
indptr = array("i", [0])
# XXX we could change values to an array.array as well, but it
# would require (heuristic) conversion of dtype to typecode...
values = []
# collect all the possible feature names and build sparse matrix at
# same time
for x in X:
for f, v in six.iteritems(x):
if isinstance(v, six.string_types):
f = "%s%s%s" % (f, self.separator, v)
v = 1
if f in vocab:
indices.append(vocab[f])
values.append(dtype(v))
else:
if fitting:
feature_names.append(f)
vocab[f] = len(vocab)
indices.append(vocab[f])
values.append(dtype(v))
indptr.append(len(indices))
if len(indptr) == 1:
raise ValueError("Sample sequence X is empty.")
indices = frombuffer_empty(indices, dtype=np.intc)
indptr = np.frombuffer(indptr, dtype=np.intc)
shape = (len(indptr) - 1, len(vocab))
result_matrix = sp.csr_matrix((values, indices, indptr),
shape=shape, dtype=dtype)
# Sort everything if asked
if fitting and self.sort:
feature_names.sort()
map_index = np.empty(len(feature_names), dtype=np.int32)
for new_val, f in enumerate(feature_names):
map_index[new_val] = vocab[f]
vocab[f] = new_val
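            # Slicing the columns with map_index permutes the sparse matrix so
            # its column order matches the sorted feature names now recorded
            # in vocab.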
result_matrix = result_matrix[:, map_index]
if self.sparse:
result_matrix.sort_indices()
else:
result_matrix = result_matrix.toarray()
if fitting:
self.feature_names_ = feature_names
self.vocabulary_ = vocab
return result_matrix
def fit_transform(self, X, y=None):
"""Learn a list of feature name -> indices mappings and transform X.
Like fit(X) followed by transform(X), but does not require
materializing X in memory.
Parameters
----------
X : Mapping or iterable over Mappings
Dict(s) or Mapping(s) from feature names (arbitrary Python
objects) to feature values (strings or convertible to dtype).
y : (ignored)
Returns
-------
Xa : {array, sparse matrix}
Feature vectors; always 2-d.
"""
return self._transform(X, fitting=True)
def inverse_transform(self, X, dict_type=dict):
"""Transform array or sparse matrix X back to feature mappings.
X must have been produced by this DictVectorizer's transform or
fit_transform method; it may only have passed through transformers
that preserve the number of features and their order.
In the case of one-hot/one-of-K coding, the constructed feature
names and values are returned rather than the original ones.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Sample matrix.
dict_type : callable, optional
Constructor for feature mappings. Must conform to the
collections.Mapping API.
Returns
-------
D : list of dict_type objects, length = n_samples
Feature mappings for the samples in X.
"""
# COO matrix is not subscriptable
X = check_array(X, accept_sparse=['csr', 'csc'])
n_samples = X.shape[0]
names = self.feature_names_
dicts = [dict_type() for _ in xrange(n_samples)]
if sp.issparse(X):
for i, j in zip(*X.nonzero()):
dicts[i][names[j]] = X[i, j]
else:
for i, d in enumerate(dicts):
for j, v in enumerate(X[i, :]):
if v != 0:
d[names[j]] = X[i, j]
return dicts
def transform(self, X, y=None):
"""Transform feature->value dicts to array or sparse matrix.
Named features not encountered during fit or fit_transform will be
silently ignored.
Parameters
----------
X : Mapping or iterable over Mappings, length = n_samples
Dict(s) or Mapping(s) from feature names (arbitrary Python
objects) to feature values (strings or convertible to dtype).
y : (ignored)
Returns
-------
Xa : {array, sparse matrix}
Feature vectors; always 2-d.
"""
if self.sparse:
return self._transform(X, fitting=False)
else:
dtype = self.dtype
vocab = self.vocabulary_
X = _tosequence(X)
Xa = np.zeros((len(X), len(vocab)), dtype=dtype)
for i, x in enumerate(X):
for f, v in six.iteritems(x):
if isinstance(v, six.string_types):
f = "%s%s%s" % (f, self.separator, v)
v = 1
try:
Xa[i, vocab[f]] = dtype(v)
except KeyError:
pass
return Xa
def get_feature_names(self):
"""Returns a list of feature names, ordered by their indices.
If one-of-K coding is applied to categorical features, this will
include the constructed feature names but not the original ones.
"""
return self.feature_names_
def restrict(self, support, indices=False):
"""Restrict the features to those in support using feature selection.
This function modifies the estimator in-place.
Parameters
----------
support : array-like
Boolean mask or list of indices (as returned by the get_support
member of feature selectors).
indices : boolean, optional
Whether support is a list of indices.
Returns
-------
self
Examples
--------
>>> from sklearn.feature_extraction import DictVectorizer
>>> from sklearn.feature_selection import SelectKBest, chi2
>>> v = DictVectorizer()
>>> D = [{'foo': 1, 'bar': 2}, {'foo': 3, 'baz': 1}]
>>> X = v.fit_transform(D)
>>> support = SelectKBest(chi2, k=2).fit(X, [0, 1])
>>> v.get_feature_names()
['bar', 'baz', 'foo']
>>> v.restrict(support.get_support()) # doctest: +ELLIPSIS
DictVectorizer(dtype=..., separator='=', sort=True,
sparse=True)
>>> v.get_feature_names()
['bar', 'foo']
"""
if not indices:
support = np.where(support)[0]
names = self.feature_names_
new_vocab = {}
for i in support:
new_vocab[names[i]] = len(new_vocab)
self.vocabulary_ = new_vocab
self.feature_names_ = [f for f, i in sorted(six.iteritems(new_vocab),
key=itemgetter(1))]
return self
| bsd-3-clause |
ContinuumIO/dask | dask/bytes/tests/test_s3.py | 1 | 14943 | import io
import os
from contextlib import contextmanager
from functools import partial
from distutils.version import LooseVersion
import pytest
import numpy as np
s3fs = pytest.importorskip("s3fs")
boto3 = pytest.importorskip("boto3")
moto = pytest.importorskip("moto")
httpretty = pytest.importorskip("httpretty")
from tlz import concat, valmap
from dask import compute
from dask.bytes.core import read_bytes, open_files
from s3fs import S3FileSystem as DaskS3FileSystem
from dask.bytes.utils import compress
from fsspec.compression import compr
compute = partial(compute, scheduler="sync")
test_bucket_name = "test"
files = {
"test/accounts.1.json": (
b'{"amount": 100, "name": "Alice"}\n'
b'{"amount": 200, "name": "Bob"}\n'
b'{"amount": 300, "name": "Charlie"}\n'
b'{"amount": 400, "name": "Dennis"}\n'
),
"test/accounts.2.json": (
b'{"amount": 500, "name": "Alice"}\n'
b'{"amount": 600, "name": "Bob"}\n'
b'{"amount": 700, "name": "Charlie"}\n'
b'{"amount": 800, "name": "Dennis"}\n'
),
}
@contextmanager
def ensure_safe_environment_variables():
"""
Get a context manager to safely set environment variables
All changes will be undone on close, hence environment variables set
within this contextmanager will neither persist nor change global state.
"""
saved_environ = dict(os.environ)
try:
yield
finally:
os.environ.clear()
os.environ.update(saved_environ)
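# Editor's sketch (not part of the original test module): the guard above makes
# credential tweaks local to a block; once the context exits, os.environ is
# restored to exactly its previous contents, e.g.
# >>> with ensure_safe_environment_variables():
# ...     os.environ["AWS_ACCESS_KEY_ID"] = "temporary"
# >>> "AWS_ACCESS_KEY_ID" in os.environ   # False, unless it was set before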
@pytest.fixture()
def s3():
with ensure_safe_environment_variables():
# temporary workaround as moto fails for botocore >= 1.11 otherwise,
# see https://github.com/spulec/moto/issues/1924 & 1952
os.environ.setdefault("AWS_ACCESS_KEY_ID", "foobar_key")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "foobar_secret")
# writable local S3 system
import moto
with moto.mock_s3():
client = boto3.client("s3")
client.create_bucket(Bucket=test_bucket_name, ACL="public-read-write")
for f, data in files.items():
client.put_object(Bucket=test_bucket_name, Key=f, Body=data)
fs = s3fs.S3FileSystem(anon=True)
            fs.invalidate_cache()
yield fs
httpretty.HTTPretty.disable()
httpretty.HTTPretty.reset()
@contextmanager
def s3_context(bucket, files):
with ensure_safe_environment_variables():
# temporary workaround as moto fails for botocore >= 1.11 otherwise,
# see https://github.com/spulec/moto/issues/1924 & 1952
os.environ.setdefault("AWS_ACCESS_KEY_ID", "foobar_key")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "foobar_secret")
with moto.mock_s3():
client = boto3.client("s3")
client.create_bucket(Bucket=bucket, ACL="public-read-write")
for f, data in files.items():
client.put_object(Bucket=bucket, Key=f, Body=data)
fs = DaskS3FileSystem(anon=True)
fs.invalidate_cache()
yield fs
for f, data in files.items():
try:
                    client.delete_object(Bucket=bucket, Key=f)
except Exception:
pass
finally:
httpretty.HTTPretty.disable()
httpretty.HTTPretty.reset()
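# Editor's sketch (not part of the original test module): s3_context stands up
# a throwaway moto-backed bucket pre-populated with `files` and yields an
# anonymous filesystem, e.g.
# >>> with s3_context("scratch", {"a.txt": b"hello"}) as fs:
# ...     fs.ls("scratch")   # expected to list the uploaded key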
@pytest.fixture()
@pytest.mark.slow
def s3_with_yellow_tripdata(s3):
"""
Fixture with sample yellowtrip CSVs loaded into S3.
Provides the following CSVs:
* s3://test/nyc-taxi/2015/yellow_tripdata_2015-01.csv
    * s3://test/nyc-taxi/2014/yellow_tripdata_2014-mm.csv
for mm from 01 - 12.
"""
pd = pytest.importorskip("pandas")
data = {
"VendorID": {0: 2, 1: 1, 2: 1, 3: 1, 4: 1},
"tpep_pickup_datetime": {
0: "2015-01-15 19:05:39",
1: "2015-01-10 20:33:38",
2: "2015-01-10 20:33:38",
3: "2015-01-10 20:33:39",
4: "2015-01-10 20:33:39",
},
"tpep_dropoff_datetime": {
0: "2015-01-15 19:23:42",
1: "2015-01-10 20:53:28",
2: "2015-01-10 20:43:41",
3: "2015-01-10 20:35:31",
4: "2015-01-10 20:52:58",
},
"passenger_count": {0: 1, 1: 1, 2: 1, 3: 1, 4: 1},
"trip_distance": {0: 1.59, 1: 3.3, 2: 1.8, 3: 0.5, 4: 3.0},
"pickup_longitude": {
0: -73.993896484375,
1: -74.00164794921875,
2: -73.96334075927734,
3: -74.00908660888672,
4: -73.97117614746094,
},
"pickup_latitude": {
0: 40.7501106262207,
1: 40.7242431640625,
2: 40.80278778076172,
3: 40.71381759643555,
4: 40.762428283691406,
},
"RateCodeID": {0: 1, 1: 1, 2: 1, 3: 1, 4: 1},
"store_and_fwd_flag": {0: "N", 1: "N", 2: "N", 3: "N", 4: "N"},
"dropoff_longitude": {
0: -73.97478485107422,
1: -73.99441528320312,
2: -73.95182037353516,
3: -74.00432586669923,
4: -74.00418090820312,
},
"dropoff_latitude": {
0: 40.75061798095703,
1: 40.75910949707031,
2: 40.82441329956055,
3: 40.71998596191406,
4: 40.742652893066406,
},
"payment_type": {0: 1, 1: 1, 2: 2, 3: 2, 4: 2},
"fare_amount": {0: 12.0, 1: 14.5, 2: 9.5, 3: 3.5, 4: 15.0},
"extra": {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5},
"mta_tax": {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5},
"tip_amount": {0: 3.25, 1: 2.0, 2: 0.0, 3: 0.0, 4: 0.0},
"tolls_amount": {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0},
"improvement_surcharge": {0: 0.3, 1: 0.3, 2: 0.3, 3: 0.3, 4: 0.3},
"total_amount": {0: 17.05, 1: 17.8, 2: 10.8, 3: 4.8, 4: 16.3},
}
sample = pd.DataFrame(data)
df = sample.take(np.arange(5).repeat(10000))
file = io.BytesIO()
sfile = io.TextIOWrapper(file)
df.to_csv(sfile, index=False)
key = "nyc-taxi/2015/yellow_tripdata_2015-01.csv"
client = boto3.client("s3")
client.put_object(Bucket=test_bucket_name, Key=key, Body=file)
key = "nyc-taxi/2014/yellow_tripdata_2014-{:0>2d}.csv"
for i in range(1, 13):
file.seek(0)
client.put_object(Bucket=test_bucket_name, Key=key.format(i), Body=file)
yield
def test_get_s3():
s3 = DaskS3FileSystem(key="key", secret="secret")
assert s3.key == "key"
assert s3.secret == "secret"
s3 = DaskS3FileSystem(username="key", password="secret")
assert s3.key == "key"
assert s3.secret == "secret"
with pytest.raises(KeyError):
DaskS3FileSystem(key="key", username="key")
with pytest.raises(KeyError):
DaskS3FileSystem(secret="key", password="key")
def test_open_files_write(s3):
paths = ["s3://" + test_bucket_name + "/more/" + f for f in files]
fils = open_files(paths, mode="wb")
for fil, data in zip(fils, files.values()):
with fil as f:
f.write(data)
sample, values = read_bytes("s3://" + test_bucket_name + "/more/test/accounts.*")
results = compute(*concat(values))
assert set(list(files.values())) == set(results)
def test_read_bytes(s3):
sample, values = read_bytes("s3://" + test_bucket_name + "/test/accounts.*")
assert isinstance(sample, bytes)
assert sample[:5] == files[sorted(files)[0]][:5]
assert sample.endswith(b"\n")
assert isinstance(values, (list, tuple))
assert isinstance(values[0], (list, tuple))
assert hasattr(values[0][0], "dask")
assert sum(map(len, values)) >= len(files)
results = compute(*concat(values))
assert set(results) == set(files.values())
def test_read_bytes_sample_delimiter(s3):
sample, values = read_bytes(
"s3://" + test_bucket_name + "/test/accounts.*", sample=80, delimiter=b"\n"
)
assert sample.endswith(b"\n")
sample, values = read_bytes(
"s3://" + test_bucket_name + "/test/accounts.1.json", sample=80, delimiter=b"\n"
)
assert sample.endswith(b"\n")
sample, values = read_bytes(
"s3://" + test_bucket_name + "/test/accounts.1.json", sample=2, delimiter=b"\n"
)
assert sample.endswith(b"\n")
def test_read_bytes_non_existing_glob(s3):
with pytest.raises(IOError):
read_bytes("s3://" + test_bucket_name + "/non-existing/*")
def test_read_bytes_blocksize_none(s3):
_, values = read_bytes(
"s3://" + test_bucket_name + "/test/accounts.*", blocksize=None
)
assert sum(map(len, values)) == len(files)
def test_read_bytes_blocksize_on_large_data(s3_with_yellow_tripdata):
_, L = read_bytes(
"s3://{}/nyc-taxi/2015/yellow_tripdata_2015-01.csv".format(test_bucket_name),
blocksize=None,
anon=True,
)
assert len(L) == 1
_, L = read_bytes(
"s3://{}/nyc-taxi/2014/*.csv".format(test_bucket_name),
blocksize=None,
anon=True,
)
assert len(L) == 12
@pytest.mark.parametrize("blocksize", [5, 15, 45, 1500])
def test_read_bytes_block(s3, blocksize):
_, vals = read_bytes(
"s3://" + test_bucket_name + "/test/account*", blocksize=blocksize
)
assert list(map(len, vals)) == [(len(v) // blocksize + 1) for v in files.values()]
results = compute(*concat(vals))
assert sum(len(r) for r in results) == sum(len(v) for v in files.values())
ourlines = b"".join(results).split(b"\n")
testlines = b"".join(files.values()).split(b"\n")
assert set(ourlines) == set(testlines)
@pytest.mark.parametrize("blocksize", [5, 15, 45, 1500])
def test_read_bytes_delimited(s3, blocksize):
_, values = read_bytes(
"s3://" + test_bucket_name + "/test/accounts*",
blocksize=blocksize,
delimiter=b"\n",
)
_, values2 = read_bytes(
"s3://" + test_bucket_name + "/test/accounts*",
blocksize=blocksize,
delimiter=b"foo",
)
assert [a.key for a in concat(values)] != [b.key for b in concat(values2)]
results = compute(*concat(values))
res = [r for r in results if r]
assert all(r.endswith(b"\n") for r in res)
ourlines = b"".join(res).split(b"\n")
testlines = b"".join(files[k] for k in sorted(files)).split(b"\n")
assert ourlines == testlines
# delimiter not at the end
d = b"}"
_, values = read_bytes(
"s3://" + test_bucket_name + "/test/accounts*", blocksize=blocksize, delimiter=d
)
results = compute(*concat(values))
res = [r for r in results if r]
# All should end in } except EOF
assert sum(r.endswith(b"}") for r in res) == len(res) - 2
ours = b"".join(res)
test = b"".join(files[v] for v in sorted(files))
assert ours == test
@pytest.mark.parametrize(
"fmt,blocksize", [(fmt, None) for fmt in compr] + [(fmt, 10) for fmt in compr]
)
def test_compression(s3, fmt, blocksize):
if fmt not in compress:
pytest.skip("compression function not provided")
s3._cache.clear()
with s3_context("compress", valmap(compress[fmt], files)):
if fmt and blocksize:
with pytest.raises(ValueError):
read_bytes(
"s3://compress/test/accounts.*",
compression=fmt,
blocksize=blocksize,
)
return
sample, values = read_bytes(
"s3://compress/test/accounts.*", compression=fmt, blocksize=blocksize
)
assert sample.startswith(files[sorted(files)[0]][:10])
assert sample.endswith(b"\n")
results = compute(*concat(values))
assert b"".join(results) == b"".join([files[k] for k in sorted(files)])
@pytest.mark.parametrize("mode", ["rt", "rb"])
def test_open_files(s3, mode):
myfiles = open_files("s3://" + test_bucket_name + "/test/accounts.*", mode=mode)
assert len(myfiles) == len(files)
for lazy_file, path in zip(myfiles, sorted(files)):
with lazy_file as f:
data = f.read()
sol = files[path]
assert data == sol if mode == "rb" else sol.decode()
double = lambda x: x * 2
def test_modification_time_read_bytes():
with s3_context("compress", files):
_, a = read_bytes("s3://compress/test/accounts.*", anon=True)
_, b = read_bytes("s3://compress/test/accounts.*", anon=True)
assert [aa._key for aa in concat(a)] == [bb._key for bb in concat(b)]
with s3_context("compress", valmap(double, files)):
_, c = read_bytes("s3://compress/test/accounts.*", anon=True)
assert [aa._key for aa in concat(a)] != [cc._key for cc in concat(c)]
@pytest.mark.parametrize("engine", ["pyarrow", "fastparquet"])
def test_parquet(s3, engine):
dd = pytest.importorskip("dask.dataframe")
from dask.dataframe._compat import tm
lib = pytest.importorskip(engine)
if engine == "pyarrow" and LooseVersion(lib.__version__) < "0.13.1":
pytest.skip("pyarrow < 0.13.1 not supported for parquet")
import pandas as pd
import numpy as np
url = "s3://%s/test.parquet" % test_bucket_name
data = pd.DataFrame(
{
"i32": np.arange(1000, dtype=np.int32),
"i64": np.arange(1000, dtype=np.int64),
"f": np.arange(1000, dtype=np.float64),
"bhello": np.random.choice([u"hello", u"you", u"people"], size=1000).astype(
"O"
),
},
index=pd.Index(np.arange(1000), name="foo"),
)
df = dd.from_pandas(data, chunksize=500)
df.to_parquet(url, engine=engine)
files = [f.split("/")[-1] for f in s3.ls(url)]
assert "_common_metadata" in files
assert "part.0.parquet" in files
df2 = dd.read_parquet(url, index="foo", engine=engine)
assert len(df2.divisions) > 1
tm.assert_frame_equal(data, df2.compute())
def test_parquet_wstoragepars(s3):
dd = pytest.importorskip("dask.dataframe")
pytest.importorskip("fastparquet")
import pandas as pd
import numpy as np
url = "s3://%s/test.parquet" % test_bucket_name
data = pd.DataFrame({"i32": np.array([0, 5, 2, 5])})
df = dd.from_pandas(data, chunksize=500)
df.to_parquet(url, write_index=False)
dd.read_parquet(url, storage_options={"default_fill_cache": False})
assert s3.current().default_fill_cache is False
dd.read_parquet(url, storage_options={"default_fill_cache": True})
assert s3.current().default_fill_cache is True
dd.read_parquet(url, storage_options={"default_block_size": 2 ** 20})
assert s3.current().default_block_size == 2 ** 20
with s3.current().open(url + "/_metadata") as f:
assert f.blocksize == 2 ** 20
def test_get_pyarrow_fs_s3(s3):
pa = pytest.importorskip("pyarrow")
fs = DaskS3FileSystem(anon=True)
assert isinstance(fs, pa.filesystem.FileSystem)
| bsd-3-clause |
ULHPC/tutorials | python/advanced/scikit-learn/scripts/supervized/main.py | 2 | 3092 | import argparse
import logging
import os
import sys
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
from joblib import Parallel, parallel_backend
from joblib import register_parallel_backend
from joblib import delayed
from joblib import cpu_count
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from ipyparallel import Client
from ipyparallel.joblib import IPythonParallelBackend
import numpy as np
import pandas as pd
import datetime
from sklearn.model_selection import GridSearchCV
FILE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(FILE_DIR)
#prepare the logger
parser = argparse.ArgumentParser()
parser.add_argument("-p", "--profile", default="ipy_profile",
help="Name of IPython profile to use")
args = parser.parse_args()
profile = args.profile
logging.basicConfig(filename=os.path.join(FILE_DIR,profile+'.log'),
filemode='w',
level=logging.DEBUG)
logging.info("number of CPUs found: {0}".format(cpu_count()))
logging.info("args.profile: {0}".format(profile))
#prepare the engines
c = Client(profile=profile)
#The following command will make sure that each engine is running in
# the right working directory to access the custom function(s).
c[:].map(os.chdir, [FILE_DIR]*len(c))
logging.info("c.ids :{0}".format(str(c.ids)))
bview = c.load_balanced_view()
register_parallel_backend('ipyparallel',
lambda : IPythonParallelBackend(view=bview))
#Get data
digits = load_digits()
#prepare it for the custom function
#it would be better to use cross-validation
#outside the scope of this tutorial
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target,
test_size=0.3)
#some parameters to test in parallel
param_space = {
'C': np.logspace(-6, 6, 20),
'gamma': np.logspace(-6,1,20)
}
svc_rbf = SVC(kernel='rbf',
shrinking=False)
search = GridSearchCV(svc_rbf,
param_space,
return_train_score=True,
n_jobs=len(c))
with parallel_backend('ipyparallel'):
search.fit(X_train, y_train)
results = search.cv_results_
results = pd.DataFrame(results)
results.to_csv(os.path.join(FILE_DIR,'scores_rbf_digits.csv'))
scores = search.cv_results_['mean_test_score'].reshape(len(param_space['C']),len(param_space['gamma']))
plt.figure()
#plt.subplots_adjust(left=.2, right=0.95, bottom=0.15, top=0.95)
plt.imshow(scores, interpolation='nearest', cmap=plt.cm.hot)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(len(param_space['gamma'])), map(lambda x : "%.2E"%(x),param_space['gamma']), fontsize=8, rotation=45)
plt.yticks(np.arange(len(param_space['C'])), map(lambda x : "%.2E"%(x),param_space['C']), fontsize=8, rotation=45)
plt.title('Validation accuracy')
plt.savefig(os.path.join(FILE_DIR,"validation.png"))
| gpl-3.0 |
tmoer/multimodal_varinf | rlutils/helpers.py | 1 | 9855 | # -*- coding: utf-8 -*-
"""
Grid-world environment
@author: thomas
"""
import sys
import os.path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from rlenv.grid import grid_env as grid_env
from rlutils.policies import egreedy
from scipy.linalg import norm
### other stuff
def make_rl_data(Env,batch_size):
world = np.zeros([7,7],dtype='int32')
world[1:6,1] = 1
world[1:3,4] = 1
world[4:6,4] = 1
s_data = np.zeros([batch_size,Env.observation_shape],dtype='int32')
s1_data = np.zeros([batch_size,Env.observation_shape],dtype='int32')
a_data = np.zeros([batch_size,1],dtype='int32')
r_data = np.zeros([batch_size,1],dtype='int32')
term_data = np.zeros([batch_size,1],dtype='int32')
count = 0
while count < batch_size:
        i, j, k, l, m, n = [np.random.randint(0, 7, 1) for _ in range(6)]
a = np.random.randint(0,4,1)[0]
do = not bool(world[i,j]) and (not bool(world[k,l])) and (not bool(world[m,n]))
if do:
s = np.array([i,j,k,l,m,n]).flatten()
Env.set_state(s)
s1 , r , dead = Env.step([a])
s_data[count,] = s
s1_data[count,] = s1
a_data[count] = a
r_data[count] = r
term_data[count] = dead
count += 1
return s_data, a_data, s1_data, r_data, term_data
def make_test_data(sess,model,Env,batch_size,epsilon=0.05):
''' on policy test data '''
s_data = np.zeros([batch_size,Env.observation_shape],dtype='int32')
s1_data = np.zeros([batch_size,Env.observation_shape],dtype='int32')
a_data = np.zeros([batch_size,1],dtype='int32')
r_data = np.zeros([batch_size,1],dtype='int32')
term_data = np.zeros([batch_size,1],dtype='int32')
count = 0
s = Env.reset()
while count < batch_size:
Qsa = sess.run(model.Qsa, feed_dict = {model.x :s[None,:],
model.k : 1,
})
a = np.array([egreedy(Qsa[0],epsilon)])
s1 = sess.run(model.y_sample,{ model.x : s[None,:],
model.y : np.zeros(np.shape(s))[None,:],
model.a : a[:,None],
model.lamb : 1,
model.temp : 0.0001,
model.is_training : False,
model.k: 1})
s_data[count,] = s
a_data[count] = a
s1_data[count,] = s1
Env.set_state(s1)
term_data[count,] = dead = Env._check_dead()
if dead:
s = Env.reset()
else:
s = s1
count += 1
return s_data, a_data, s1_data, r_data, term_data
def plot_predictions(model,sess,n_row,n_col,run_id,hps,on_policy=False,s=np.array([0,0,1,3,5,3])):
world = np.zeros([7,7],dtype='int32')
world[1:6,1] = 1
world[1:3,4] = 1
world[4:6,4] = 1
fig = plt.figure()#figsize=(7,7),dpi=600,aspect='auto')
for row in range(n_row):
for col in range(n_col):
ax1 = fig.add_subplot(n_row,n_col,((n_col*col) + row + 1),aspect='equal')
# plot the environment
ax1.axis('off')
plt.xlim([-1,8])
plt.ylim([-1,8])
for i in range(7):
for j in range(7):
if world[i,j]==1:
col = "black"
else:
col = "white"
ax1.add_patch(
patches.Rectangle(
(i,j),1,1,
#fill=False,
edgecolor='black',
linewidth = 2,
facecolor = col,),
)
# sample some state
do = False
if not on_policy:
while not do:
                    i, j, k, l, m, n = [np.random.randint(0, 7, 1) for _ in range(6)]
a = np.random.randint(0,4,1)
do = not bool(world[i,j]) and (not bool(world[k,l])) and (not bool(world[m,n]))
s = np.array([i,j,k,l,m,n]).flatten()
else:
i,j,k,l,m,n = s
Qsa = sess.run(model.Qsa, feed_dict = {model.x :s[None,:],
model.k : 1,
})
a = np.array([egreedy(Qsa[0],0.01)])
# add the start
ax1.add_artist(plt.Circle((m+0.5,n+0.5),0.3,color='blue'))
ax1.add_artist(plt.Circle((k+0.5,l+0.5),0.3,color='red'))
ax1.add_artist(plt.Circle((i+0.5,j+0.5),0.3,color='green'))
if a == 0:
action = 'up'
elif a == 1:
action = 'down'
elif a == 2:
action = 'right'
elif a == 3:
action = 'left'
ax1.set_title(action)
trans = predict(model,sess,s,a)
for agent in range(3):
for i in range(7):
for j in range(7):
if trans[i,j,agent]>0:
if agent == 0:
col = 'green'
elif agent == 1:
col = 'red'
elif agent == 2:
col = 'blue'
ax1.add_patch(
patches.Rectangle(
(i,j),1,1,
fill=True,
alpha = trans[i,j,agent],
edgecolor='black',
linewidth = 2,
facecolor = col,),
)
if on_policy:
s1 = sess.run(model.y_sample,{ model.x : s[None,:],
model.y : np.zeros(np.shape(s[None,:])),
model.a : a[:,None],
model.lamb : 1,
model.temp : 0.0001,
model.is_training : False,
model.k: 1})
s = s1[0]
return s
def predict(model,sess,some_s,some_a,n_test_samp=200):
freq = np.zeros([7,7,3])
for m in range(n_test_samp):
y_sample = sess.run(model.y_sample,{ model.x : some_s[None,:],
model.y : np.zeros(np.shape(some_s))[None,:],
model.a : some_a[:,None],
model.lamb : 1,
model.temp : 0.0001,
model.is_training : False,
model.k: 1})
y_sample = y_sample.flatten()
freq[y_sample[0],y_sample[1],0] += 1
freq[y_sample[2],y_sample[3],1] += 1
freq[y_sample[4],y_sample[5],2] += 1
trans = freq/n_test_samp
return trans
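# Editor's note (not part of the original module): predict() draws n_test_samp
# next states from the model and returns a 7x7x3 array of empirical
# frequencies, one 7x7 grid per agent (green, red, blue); each agent's grid
# sums to 1, e.g. trans = predict(model, sess, s, a); trans[:, :, 0].sum() == 1.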
def kl_preds_v2(model,sess,s_test,a_test,n_rep_per_item=200):
## Compare sample distribution to ground truth
Env = grid_env(False)
n_test_items,state_size = s_test.shape
distances = np.empty([n_test_items,3])
for i in range(n_test_items):
state = s_test[i,:].astype('int32')
action = np.round(a_test[i,:]).astype('int32')
# ground truth
state_truth = np.empty([n_rep_per_item,s_test.shape[1]])
for o in range(n_rep_per_item):
Env.set_state(state.flatten())
s1,r,dead = Env.step(action.flatten())
state_truth[o,:] = s1
truth_count,bins = np.histogramdd(state_truth,bins=[np.arange(8)-0.5]*state_size)
truth_prob = truth_count/n_rep_per_item
# predictions of model
y_sample = sess.run(model.y_sample,{ model.x : state[None,:].repeat(n_rep_per_item,axis=0),
model.y : np.zeros(np.shape(state[None,:])).repeat(n_rep_per_item,axis=0),
model.a : action[None,:].repeat(n_rep_per_item,axis=0),
model.Qtarget : np.zeros(np.shape(action[None,:])).repeat(n_rep_per_item,axis=0),
model.lr : 0,
model.lamb : 1,
model.temp : 0.00001,
model.is_training : False,
model.k: 1})
sample_count,bins = np.histogramdd(y_sample,bins=[np.arange(8)-0.5]*state_size)
sample_prob = sample_count/n_rep_per_item
distances[i,0]= np.sum(truth_prob*(np.log(truth_prob+1e-5)-np.log(sample_prob+1e-5))) # KL(p|p_tilde)
distances[i,1]= np.sum(sample_prob*(np.log(sample_prob+1e-5)-np.log(truth_prob+1e-5))) # Inverse KL(p_tilde|p)
distances[i,2]= norm(np.sqrt(truth_prob) - np.sqrt(sample_prob))/np.sqrt(2)
return np.mean(distances,axis=0) | mit |
cloudera/ibis | ibis/expr/datatypes.py | 1 | 51710 | import builtins
import collections
import datetime
import functools
import itertools
import numbers
import re
import typing
from typing import Any as GenericAny
from typing import (
Callable,
Iterator,
List,
Mapping,
NamedTuple,
Optional,
Sequence,
)
from typing import Set as GenericSet
from typing import Tuple, TypeVar, Union
import pandas as pd
import toolz
from multipledispatch import Dispatcher
import ibis.common.exceptions as com
import ibis.expr.types as ir
from ibis import util
IS_SHAPELY_AVAILABLE = False
try:
import shapely.geometry
IS_SHAPELY_AVAILABLE = True
except ImportError:
...
class DataType:
__slots__ = ('nullable',)
def __init__(self, nullable: bool = True) -> None:
self.nullable = nullable
def __call__(self, nullable: bool = True) -> 'DataType':
if nullable is not True and nullable is not False:
raise TypeError(
"__call__ only accepts the 'nullable' argument. "
"Please construct a new instance of the type to change the "
"values of the attributes."
)
return self._factory(nullable=nullable)
def _factory(self, nullable: bool = True) -> 'DataType':
slots = {
slot: getattr(self, slot)
for slot in self.__slots__
if slot != 'nullable'
}
return type(self)(nullable=nullable, **slots)
def __eq__(self, other) -> bool:
return self.equals(other)
def __ne__(self, other) -> bool:
return not (self == other)
def __hash__(self) -> int:
custom_parts = tuple(
getattr(self, slot)
for slot in toolz.unique(self.__slots__ + ('nullable',))
)
return hash((type(self),) + custom_parts)
def __repr__(self) -> str:
return '{}({})'.format(
self.name,
', '.join(
'{}={!r}'.format(slot, getattr(self, slot))
for slot in toolz.unique(self.__slots__ + ('nullable',))
),
)
def __str__(self) -> str:
return '{}{}'.format(
self.name.lower(), '[non-nullable]' if not self.nullable else ''
)
@property
def name(self) -> str:
return type(self).__name__
def equals(
self,
other: 'DataType',
cache: Optional[Mapping[GenericAny, bool]] = None,
) -> bool:
if isinstance(other, str):
raise TypeError(
'Comparing datatypes to strings is not allowed. Convert '
'{!r} to the equivalent DataType instance.'.format(other)
)
return (
isinstance(other, type(self))
and self.nullable == other.nullable
and self.__slots__ == other.__slots__
and all(
getattr(self, slot) == getattr(other, slot)
for slot in self.__slots__
)
)
def castable(self, target, **kwargs):
return castable(self, target, **kwargs)
def cast(self, target, **kwargs):
return cast(self, target, **kwargs)
def scalar_type(self):
return functools.partial(self.scalar, dtype=self)
def column_type(self):
return functools.partial(self.column, dtype=self)
def _literal_value_hash_key(self, value) -> int:
"""Return a hash for `value`."""
return self, value
class Any(DataType):
__slots__ = ()
class Primitive(DataType):
__slots__ = ()
def __repr__(self) -> str:
name = self.name.lower()
if not self.nullable:
return '{}[non-nullable]'.format(name)
return name
class Null(DataType):
scalar = ir.NullScalar
column = ir.NullColumn
__slots__ = ()
class Variadic(DataType):
__slots__ = ()
class Boolean(Primitive):
scalar = ir.BooleanScalar
column = ir.BooleanColumn
__slots__ = ()
Bounds = NamedTuple('Bounds', [('lower', int), ('upper', int)])
class Integer(Primitive):
scalar = ir.IntegerScalar
column = ir.IntegerColumn
__slots__ = ()
@property
def _nbytes(self) -> int:
raise TypeError(
"Cannot determine the size in bytes of an abstract integer type."
)
class String(Variadic):
"""A type representing a string.
Notes
-----
Because of differences in the way different backends handle strings, we
cannot assume that strings are UTF-8 encoded.
"""
scalar = ir.StringScalar
column = ir.StringColumn
__slots__ = ()
class Binary(Variadic):
"""A type representing a blob of bytes.
Notes
-----
    Some databases treat strings and blobs equally, and some do not. For
example, Impala doesn't make a distinction between string and binary types
but PostgreSQL has a TEXT type and a BYTEA type which are distinct types
that behave differently.
"""
scalar = ir.BinaryScalar
column = ir.BinaryColumn
__slots__ = ()
class Date(Primitive):
scalar = ir.DateScalar
column = ir.DateColumn
__slots__ = ()
class Time(Primitive):
scalar = ir.TimeScalar
column = ir.TimeColumn
__slots__ = ()
class Timestamp(DataType):
scalar = ir.TimestampScalar
column = ir.TimestampColumn
__slots__ = ('timezone',)
def __init__(
self, timezone: Optional[str] = None, nullable: bool = True
) -> None:
super().__init__(nullable=nullable)
self.timezone = timezone
def __str__(self) -> str:
timezone = self.timezone
typename = self.name.lower()
if timezone is None:
return typename
return '{}({!r})'.format(typename, timezone)
class SignedInteger(Integer):
@property
def largest(self):
return int64
@property
def bounds(self):
exp = self._nbytes * 8 - 1
upper = (1 << exp) - 1
return Bounds(lower=~upper, upper=upper)
class UnsignedInteger(Integer):
@property
def largest(self):
return uint64
@property
def bounds(self):
exp = self._nbytes * 8 - 1
upper = 1 << exp
return Bounds(lower=0, upper=upper)
class Floating(Primitive):
scalar = ir.FloatingScalar
column = ir.FloatingColumn
__slots__ = ()
@property
def largest(self):
return float64
@property
def _nbytes(self) -> int:
raise TypeError(
"Cannot determine the size in bytes of an abstract floating "
"point type."
)
class Int8(SignedInteger):
__slots__ = ()
_nbytes = 1
class Int16(SignedInteger):
__slots__ = ()
_nbytes = 2
class Int32(SignedInteger):
__slots__ = ()
_nbytes = 4
class Int64(SignedInteger):
__slots__ = ()
_nbytes = 8
class UInt8(UnsignedInteger):
__slots__ = ()
_nbytes = 1
class UInt16(UnsignedInteger):
__slots__ = ()
_nbytes = 2
class UInt32(UnsignedInteger):
__slots__ = ()
_nbytes = 4
class UInt64(UnsignedInteger):
__slots__ = ()
_nbytes = 8
class Float16(Floating):
__slots__ = ()
_nbytes = 2
class Float32(Floating):
__slots__ = ()
_nbytes = 4
class Float64(Floating):
__slots__ = ()
_nbytes = 8
Halffloat = Float16
Float = Float32
Double = Float64
class Decimal(DataType):
scalar = ir.DecimalScalar
column = ir.DecimalColumn
__slots__ = 'precision', 'scale'
def __init__(
self, precision: int, scale: int, nullable: bool = True
) -> None:
if not isinstance(precision, numbers.Integral):
raise TypeError('Decimal type precision must be an integer')
if not isinstance(scale, numbers.Integral):
raise TypeError('Decimal type scale must be an integer')
if precision < 0:
raise ValueError('Decimal type precision cannot be negative')
if not precision:
raise ValueError('Decimal type precision cannot be zero')
if scale < 0:
raise ValueError('Decimal type scale cannot be negative')
if precision < scale:
raise ValueError(
'Decimal type precision must be greater than or equal to '
'scale. Got precision={:d} and scale={:d}'.format(
precision, scale
)
)
super().__init__(nullable=nullable)
self.precision = precision # type: int
self.scale = scale # type: int
def __str__(self) -> str:
return '{}({:d}, {:d})'.format(
self.name.lower(), self.precision, self.scale
)
@property
def largest(self) -> 'Decimal':
return Decimal(38, self.scale)
class Interval(DataType):
scalar = ir.IntervalScalar
column = ir.IntervalColumn
__slots__ = 'value_type', 'unit'
# based on numpy's units
_units = {
'Y': 'year',
'Q': 'quarter',
'M': 'month',
'W': 'week',
'D': 'day',
'h': 'hour',
'm': 'minute',
's': 'second',
'ms': 'millisecond',
'us': 'microsecond',
'ns': 'nanosecond',
}
_timedelta_to_interval_units = {
'days': 'D',
'hours': 'h',
'minutes': 'm',
'seconds': 's',
'milliseconds': 'ms',
'microseconds': 'us',
'nanoseconds': 'ns',
}
def _convert_timedelta_unit_to_interval_unit(self, unit: str):
if unit not in self._timedelta_to_interval_units:
raise ValueError
return self._timedelta_to_interval_units[unit]
def __init__(
self,
unit: str = 's',
value_type: Integer = None,
nullable: bool = True,
) -> None:
super().__init__(nullable=nullable)
if unit not in self._units:
try:
unit = self._convert_timedelta_unit_to_interval_unit(unit)
except ValueError:
raise ValueError('Unsupported interval unit `{}`'.format(unit))
if value_type is None:
value_type = int32
else:
value_type = dtype(value_type)
if not isinstance(value_type, Integer):
raise TypeError("Interval's inner type must be an Integer subtype")
self.unit = unit
self.value_type = value_type
@property
def bounds(self):
return self.value_type.bounds
@property
def resolution(self):
"""Unit's name"""
return self._units[self.unit]
def __str__(self):
unit = self.unit
typename = self.name.lower()
value_type_name = self.value_type.name.lower()
return '{}<{}>(unit={!r})'.format(typename, value_type_name, unit)
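# Editor's sketch (not part of the original module): Interval pairs a unit with
# an integer payload type. Interval('h') is expected to default to int32 and
# render as "interval<int32>(unit='h')", while Interval('days') should be
# normalised to the 'D' unit via the timedelta-unit mapping above.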
class Category(DataType):
scalar = ir.CategoryScalar
column = ir.CategoryColumn
__slots__ = ('cardinality',)
def __init__(self, cardinality=None, nullable=True):
super().__init__(nullable=nullable)
self.cardinality = cardinality
def __repr__(self):
if self.cardinality is not None:
cardinality = self.cardinality
else:
cardinality = 'unknown'
return '{}(cardinality={!r})'.format(self.name, cardinality)
def to_integer_type(self):
# TODO: this should be removed I guess
if self.cardinality is None:
return int64
else:
return infer(self.cardinality)
class Struct(DataType):
scalar = ir.StructScalar
column = ir.StructColumn
__slots__ = 'names', 'types'
def __init__(
self, names: List[str], types: List[DataType], nullable: bool = True
) -> None:
"""Construct a ``Struct`` type from a `names` and `types`.
Parameters
----------
names : Sequence[str]
Sequence of strings indicating the name of each field in the
struct.
types : Sequence[Union[str, DataType]]
Sequence of strings or :class:`~ibis.expr.datatypes.DataType`
instances, one for each field
nullable : bool, optional
Whether the struct can be null
"""
if not (names and types):
raise ValueError('names and types must not be empty')
if len(names) != len(types):
raise ValueError('names and types must have the same length')
super().__init__(nullable=nullable)
self.names = names
self.types = types
@classmethod
def from_tuples(
cls,
pairs: Sequence[Tuple[str, Union[str, DataType]]],
nullable: bool = True,
) -> 'Struct':
names, types = zip(*pairs)
return cls(list(names), list(map(dtype, types)), nullable=nullable)
@classmethod
def from_dict(
cls, pairs: Mapping[str, Union[str, DataType]], nullable: bool = True,
) -> 'Struct':
names, types = pairs.keys(), pairs.values()
return cls(list(names), list(map(dtype, types)), nullable=nullable)
@property
def pairs(self) -> Mapping:
return collections.OrderedDict(zip(self.names, self.types))
def __getitem__(self, key: str) -> DataType:
return self.pairs[key]
def __hash__(self) -> int:
return hash(
(type(self), tuple(self.names), tuple(self.types), self.nullable)
)
def __repr__(self) -> str:
return '{}({}, nullable={})'.format(
self.name, list(self.pairs.items()), self.nullable
)
def __str__(self) -> str:
return '{}<{}>'.format(
self.name.lower(),
', '.join(itertools.starmap('{}: {}'.format, self.pairs.items())),
)
def _literal_value_hash_key(self, value):
return self, _tuplize(value.items())
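# Editor's sketch (not part of the original module): Struct types are usually
# built from name/type pairs, e.g.
# >>> Struct.from_tuples([('a', 'int64'), ('b', 'string')])
# which should be equivalent to Struct(['a', 'b'], [int64, string]) and print
# as struct<a: int64, b: string>.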
def _tuplize(values):
"""Recursively convert `values` to a tuple of tuples."""
def tuplize_iter(values):
yield from (
tuple(tuplize_iter(value)) if util.is_iterable(value) else value
for value in values
)
return tuple(tuplize_iter(values))
class Array(Variadic):
scalar = ir.ArrayScalar
column = ir.ArrayColumn
__slots__ = ('value_type',)
def __init__(
self, value_type: Union[str, DataType], nullable: bool = True
) -> None:
super().__init__(nullable=nullable)
self.value_type = dtype(value_type)
def __str__(self) -> str:
return '{}<{}>'.format(self.name.lower(), self.value_type)
def _literal_value_hash_key(self, value):
return self, _tuplize(value)
class Set(Variadic):
scalar = ir.SetScalar
column = ir.SetColumn
__slots__ = ('value_type',)
def __init__(
self, value_type: Union[str, DataType], nullable: bool = True
) -> None:
super().__init__(nullable=nullable)
self.value_type = dtype(value_type)
def __str__(self) -> str:
return '{}<{}>'.format(self.name.lower(), self.value_type)
class Enum(DataType):
scalar = ir.EnumScalar
column = ir.EnumColumn
__slots__ = 'rep_type', 'value_type'
def __init__(
self, rep_type: DataType, value_type: DataType, nullable: bool = True
) -> None:
super().__init__(nullable=nullable)
self.rep_type = dtype(rep_type)
self.value_type = dtype(value_type)
class Map(Variadic):
scalar = ir.MapScalar
column = ir.MapColumn
__slots__ = 'key_type', 'value_type'
def __init__(
self, key_type: DataType, value_type: DataType, nullable: bool = True
) -> None:
super().__init__(nullable=nullable)
self.key_type = dtype(key_type)
self.value_type = dtype(value_type)
def __str__(self) -> str:
return '{}<{}, {}>'.format(
self.name.lower(), self.key_type, self.value_type
)
def _literal_value_hash_key(self, value):
return self, _tuplize(value.items())
class JSON(String):
"""JSON (JavaScript Object Notation) text format."""
scalar = ir.JSONScalar
column = ir.JSONColumn
class JSONB(Binary):
"""JSON (JavaScript Object Notation) data stored as a binary
representation, which eliminates whitespace, duplicate keys,
and key ordering.
"""
scalar = ir.JSONBScalar
column = ir.JSONBColumn
class GeoSpatial(DataType):
__slots__ = 'geotype', 'srid'
column = ir.GeoSpatialColumn
scalar = ir.GeoSpatialScalar
def __init__(
self, geotype: str = None, srid: int = None, nullable: bool = True
):
"""Geospatial data type base class
Parameters
----------
geotype : str
Specification of geospatial type which could be `geography` or
`geometry`.
srid : int
Spatial Reference System Identifier
nullable : bool, optional
            Whether the value can be null
"""
super().__init__(nullable=nullable)
if geotype not in (None, 'geometry', 'geography'):
raise ValueError(
'The `geotype` parameter should be `geometry` or `geography`'
)
self.geotype = geotype
self.srid = srid
def __str__(self) -> str:
geo_op = self.name.lower()
if self.geotype is not None:
geo_op += ':' + self.geotype
if self.srid is not None:
geo_op += ';' + str(self.srid)
return geo_op
def _literal_value_hash_key(self, value):
if IS_SHAPELY_AVAILABLE:
geo_shapes = (
shapely.geometry.Point,
shapely.geometry.LineString,
shapely.geometry.Polygon,
shapely.geometry.MultiLineString,
shapely.geometry.MultiPoint,
shapely.geometry.MultiPolygon,
)
if isinstance(value, geo_shapes):
return self, value.wkt
return self, value
class Geometry(GeoSpatial):
"""Geometry is used to cast from geography types."""
column = ir.GeoSpatialColumn
scalar = ir.GeoSpatialScalar
__slots__ = ()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.geotype = 'geometry'
def __str__(self) -> str:
return self.name.lower()
class Geography(GeoSpatial):
"""Geography is used to cast from geometry types."""
column = ir.GeoSpatialColumn
scalar = ir.GeoSpatialScalar
__slots__ = ()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.geotype = 'geography'
def __str__(self) -> str:
return self.name.lower()
class Point(GeoSpatial):
"""A point described by two coordinates."""
scalar = ir.PointScalar
column = ir.PointColumn
__slots__ = ()
class LineString(GeoSpatial):
"""A sequence of 2 or more points."""
scalar = ir.LineStringScalar
column = ir.LineStringColumn
__slots__ = ()
class Polygon(GeoSpatial):
"""A set of one or more rings (closed line strings), with the first
representing the shape (external ring) and the rest representing holes in
that shape (internal rings).
"""
scalar = ir.PolygonScalar
column = ir.PolygonColumn
__slots__ = ()
class MultiLineString(GeoSpatial):
"""A set of one or more line strings."""
scalar = ir.MultiLineStringScalar
column = ir.MultiLineStringColumn
__slots__ = ()
class MultiPoint(GeoSpatial):
"""A set of one or more points."""
scalar = ir.MultiPointScalar
column = ir.MultiPointColumn
__slots__ = ()
class MultiPolygon(GeoSpatial):
"""A set of one or more polygons."""
scalar = ir.MultiPolygonScalar
column = ir.MultiPolygonColumn
__slots__ = ()
class UUID(String):
"""A universally unique identifier (UUID) is a 128-bit number used to
identify information in computer systems.
"""
scalar = ir.UUIDScalar
column = ir.UUIDColumn
__slots__ = ()
class MACADDR(String):
"""Media Access Control (MAC) Address of a network interface."""
scalar = ir.MACADDRScalar
column = ir.MACADDRColumn
__slots__ = ()
class INET(String):
"""IP address type."""
scalar = ir.INETScalar
column = ir.INETColumn
__slots__ = ()
# ---------------------------------------------------------------------
any = Any()
null = Null()
boolean = Boolean()
int_ = Integer()
int8 = Int8()
int16 = Int16()
int32 = Int32()
int64 = Int64()
uint_ = UnsignedInteger()
uint8 = UInt8()
uint16 = UInt16()
uint32 = UInt32()
uint64 = UInt64()
float = Float()
halffloat = Halffloat()
float16 = Halffloat()
float32 = Float32()
float64 = Float64()
double = Double()
string = String()
binary = Binary()
date = Date()
time = Time()
timestamp = Timestamp()
interval = Interval()
category = Category()
# geo spatial data type
geometry = GeoSpatial()
geography = GeoSpatial()
point = Point()
linestring = LineString()
polygon = Polygon()
multilinestring = MultiLineString()
multipoint = MultiPoint()
multipolygon = MultiPolygon()
# json
json = JSON()
jsonb = JSONB()
# special string based data type
uuid = UUID()
macaddr = MACADDR()
inet = INET()
_primitive_types = [
('any', any),
('null', null),
('boolean', boolean),
('bool', boolean),
('int8', int8),
('int16', int16),
('int32', int32),
('int64', int64),
('uint8', uint8),
('uint16', uint16),
('uint32', uint32),
('uint64', uint64),
('float16', float16),
('float32', float32),
('float64', float64),
('float', float),
('halffloat', float16),
('double', double),
('string', string),
('binary', binary),
('date', date),
('time', time),
('timestamp', timestamp),
('interval', interval),
('category', category),
] # type: List[Tuple[str, DataType]]
class Tokens:
"""Class to hold tokens for lexing."""
__slots__ = ()
ANY = 0
NULL = 1
PRIMITIVE = 2
DECIMAL = 3
VARCHAR = 4
CHAR = 5
ARRAY = 6
MAP = 7
STRUCT = 8
INTEGER = 9
FIELD = 10
COMMA = 11
COLON = 12
LPAREN = 13
RPAREN = 14
LBRACKET = 15
RBRACKET = 16
STRARG = 17
TIMESTAMP = 18
TIME = 19
INTERVAL = 20
SET = 21
GEOGRAPHY = 22
GEOMETRY = 23
POINT = 24
LINESTRING = 25
POLYGON = 26
MULTILINESTRING = 27
MULTIPOINT = 28
MULTIPOLYGON = 29
SEMICOLON = 30
JSON = 31
JSONB = 32
UUID = 33
MACADDR = 34
INET = 35
@staticmethod
def name(value):
return _token_names[value]
_token_names = {
getattr(Tokens, n): n for n in dir(Tokens) if n.isalpha() and n.isupper()
}
Token = collections.namedtuple('Token', ('type', 'value'))
# Adapted from tokenize.String
_STRING_REGEX = """('[^\n'\\\\]*(?:\\\\.[^\n'\\\\]*)*'|"[^\n"\\\\"]*(?:\\\\.[^\n"\\\\]*)*")""" # noqa: E501
Action = Optional[Callable[[str], Token]]
_TYPE_RULES = collections.OrderedDict(
[
# any, null, bool|boolean
('(?P<ANY>any)', lambda token: Token(Tokens.ANY, any)),
('(?P<NULL>null)', lambda token: Token(Tokens.NULL, null)),
(
'(?P<BOOLEAN>bool(?:ean)?)',
typing.cast(
Action, lambda token: Token(Tokens.PRIMITIVE, boolean)
),
),
]
+ [
# primitive types
(
'(?P<{}>{})'.format(token.upper(), token),
typing.cast(
Action,
lambda token, value=value: Token(Tokens.PRIMITIVE, value),
),
)
for token, value in _primitive_types
if token
not in {'any', 'null', 'timestamp', 'time', 'interval', 'boolean'}
]
+ [
# timestamp
(
r'(?P<TIMESTAMP>timestamp)',
lambda token: Token(Tokens.TIMESTAMP, token),
)
]
+ [
# interval - should remove?
(
r'(?P<INTERVAL>interval)',
lambda token: Token(Tokens.INTERVAL, token),
)
]
+ [
# time
(r'(?P<TIME>time)', lambda token: Token(Tokens.TIME, token))
]
+ [
# decimal + complex types
(
'(?P<{}>{})'.format(token.upper(), token),
typing.cast(
Action, lambda token, toktype=toktype: Token(toktype, token)
),
)
for token, toktype in zip(
(
'decimal',
'varchar',
'char',
'array',
'set',
'map',
'struct',
'interval',
),
(
Tokens.DECIMAL,
Tokens.VARCHAR,
Tokens.CHAR,
Tokens.ARRAY,
Tokens.SET,
Tokens.MAP,
Tokens.STRUCT,
Tokens.INTERVAL,
),
)
]
+ [
# geo spatial data type
(
'(?P<{}>{})'.format(token.upper(), token),
lambda token, toktype=toktype: Token(toktype, token),
)
for token, toktype in zip(
(
'geometry',
'geography',
'point',
'linestring',
'polygon',
'multilinestring',
'multipoint',
'multipolygon',
),
(
Tokens.GEOMETRY,
Tokens.GEOGRAPHY,
Tokens.POINT,
Tokens.LINESTRING,
Tokens.POLYGON,
Tokens.MULTILINESTRING,
Tokens.MULTIPOINT,
Tokens.MULTIPOLYGON,
),
)
]
+ [
# json data type
(
'(?P<{}>{})'.format(token.upper(), token),
lambda token, toktype=toktype: Token(toktype, token),
)
for token, toktype in zip(
# note: `jsonb` should be first to avoid conflict with `json`
('jsonb', 'json'),
(Tokens.JSONB, Tokens.JSON),
)
]
+ [
# special string based data types
('(?P<UUID>uuid)', lambda token: Token(Tokens.UUID, token)),
('(?P<MACADDR>macaddr)', lambda token: Token(Tokens.MACADDR, token)),
('(?P<INET>inet)', lambda token: Token(Tokens.INET, token)),
]
+ [
# integers, for decimal spec
(r'(?P<INTEGER>\d+)', lambda token: Token(Tokens.INTEGER, int(token))),
# struct fields
(
r'(?P<FIELD>[a-zA-Z_][a-zA-Z_0-9]*)',
lambda token: Token(Tokens.FIELD, token),
),
# timezones
('(?P<COMMA>,)', lambda token: Token(Tokens.COMMA, token)),
('(?P<COLON>:)', lambda token: Token(Tokens.COLON, token)),
('(?P<SEMICOLON>;)', lambda token: Token(Tokens.SEMICOLON, token)),
(r'(?P<LPAREN>\()', lambda token: Token(Tokens.LPAREN, token)),
(r'(?P<RPAREN>\))', lambda token: Token(Tokens.RPAREN, token)),
('(?P<LBRACKET><)', lambda token: Token(Tokens.LBRACKET, token)),
('(?P<RBRACKET>>)', lambda token: Token(Tokens.RBRACKET, token)),
(r'(?P<WHITESPACE>\s+)', None),
(
'(?P<STRARG>{})'.format(_STRING_REGEX),
lambda token: Token(Tokens.STRARG, token),
),
]
)
_TYPE_KEYS = tuple(_TYPE_RULES.keys())
_TYPE_PATTERN = re.compile('|'.join(_TYPE_KEYS), flags=re.IGNORECASE)
def _generate_tokens(pat: GenericAny, text: str) -> Iterator[Token]:
"""Generate a sequence of tokens from `text` that match `pat`
Parameters
----------
pat : compiled regex
The pattern to use for tokenization
text : str
The text to tokenize
"""
rules = _TYPE_RULES
keys = _TYPE_KEYS
groupindex = pat.groupindex
scanner = pat.scanner(text)
for m in iter(scanner.match, None):
lastgroup = m.lastgroup
func = rules[keys[groupindex[lastgroup] - 1]]
if func is not None:
yield func(m.group(lastgroup))
class TypeParser:
"""A type parser for complex types.
Parameters
----------
text : str
The text to parse
Notes
-----
Adapted from David Beazley's and Brian Jones's Python Cookbook
"""
__slots__ = 'text', 'tokens', 'tok', 'nexttok'
def __init__(self, text: str) -> None:
self.text = text # type: str
self.tokens = _generate_tokens(_TYPE_PATTERN, text)
self.tok = None # type: Optional[Token]
self.nexttok = None # type: Optional[Token]
def _advance(self) -> None:
self.tok, self.nexttok = self.nexttok, next(self.tokens, None)
def _accept(self, toktype: int) -> bool:
if self.nexttok is not None and self.nexttok.type == toktype:
self._advance()
assert (
self.tok is not None
), 'self.tok should not be None when _accept succeeds'
return True
return False
def _expect(self, toktype: int) -> None:
if not self._accept(toktype):
raise SyntaxError(
'Expected {} after {!r} in {!r}'.format(
Tokens.name(toktype),
getattr(self.tok, 'value', self.tok),
self.text,
)
)
def parse(self) -> DataType:
self._advance()
# any and null types cannot be nested
if self._accept(Tokens.ANY) or self._accept(Tokens.NULL):
assert (
self.tok is not None
), 'self.tok was None when parsing ANY or NULL type'
return self.tok.value
t = self.type()
if self.nexttok is None:
return t
else:
# additional junk was passed at the end, throw an error
additional_tokens = []
while self.nexttok is not None:
additional_tokens.append(self.nexttok.value)
self._advance()
raise SyntaxError(
'Found additional tokens {}'.format(additional_tokens)
)
def type(self) -> DataType:
"""
type : primitive
| decimal
| array
| set
| map
| struct
primitive : "any"
| "null"
| "bool"
| "boolean"
| "int8"
| "int16"
| "int32"
| "int64"
| "uint8"
| "uint16"
| "uint32"
| "uint64"
| "halffloat"
| "float"
| "double"
| "float16"
| "float32"
| "float64"
| "string"
| "time"
timestamp : "timestamp"
| "timestamp" "(" timezone ")"
interval : "interval"
| "interval" "(" unit ")"
| "interval" "<" type ">" "(" unit ")"
decimal : "decimal"
| "decimal" "(" integer "," integer ")"
integer : [0-9]+
array : "array" "<" type ">"
set : "set" "<" type ">"
map : "map" "<" type "," type ">"
struct : "struct" "<" field ":" type ("," field ":" type)* ">"
field : [a-zA-Z_][a-zA-Z_0-9]*
geography: "geography"
geometry: "geometry"
point : "point"
| "point" ";" srid
| "point" ":" geotype
| "point" ";" srid ":" geotype
linestring : "linestring"
| "linestring" ";" srid
| "linestring" ":" geotype
| "linestring" ";" srid ":" geotype
polygon : "polygon"
| "polygon" ";" srid
| "polygon" ":" geotype
| "polygon" ";" srid ":" geotype
multilinestring : "multilinestring"
| "multilinestring" ";" srid
| "multilinestring" ":" geotype
| "multilinestring" ";" srid ":" geotype
multipoint : "multipoint"
| "multipoint" ";" srid
| "multipoint" ":" geotype
| "multipoint" ";" srid ":" geotype
multipolygon : "multipolygon"
| "multipolygon" ";" srid
| "multipolygon" ":" geotype
| "multipolygon" ";" srid ":" geotype
json : "json"
jsonb : "jsonb"
uuid : "uuid"
macaddr : "macaddr"
inet : "inet"
"""
if self._accept(Tokens.PRIMITIVE):
assert self.tok is not None
return self.tok.value
elif self._accept(Tokens.TIMESTAMP):
if self._accept(Tokens.LPAREN):
self._expect(Tokens.STRARG)
assert self.tok is not None
timezone = self.tok.value[1:-1] # remove surrounding quotes
self._expect(Tokens.RPAREN)
return Timestamp(timezone=timezone)
return timestamp
elif self._accept(Tokens.TIME):
return Time()
elif self._accept(Tokens.INTERVAL):
if self._accept(Tokens.LBRACKET):
self._expect(Tokens.PRIMITIVE)
assert self.tok is not None
value_type = self.tok.value
self._expect(Tokens.RBRACKET)
else:
value_type = int32
if self._accept(Tokens.LPAREN):
self._expect(Tokens.STRARG)
assert self.tok is not None
unit = self.tok.value[1:-1] # remove surrounding quotes
self._expect(Tokens.RPAREN)
else:
unit = 's'
return Interval(unit, value_type)
elif self._accept(Tokens.DECIMAL):
if self._accept(Tokens.LPAREN):
self._expect(Tokens.INTEGER)
assert self.tok is not None
precision = self.tok.value
self._expect(Tokens.COMMA)
self._expect(Tokens.INTEGER)
scale = self.tok.value
self._expect(Tokens.RPAREN)
else:
precision = 9
scale = 0
return Decimal(precision, scale)
elif self._accept(Tokens.VARCHAR) or self._accept(Tokens.CHAR):
# VARCHAR, VARCHAR(n), CHAR, and CHAR(n) all parse as STRING
if self._accept(Tokens.LPAREN):
self._expect(Tokens.INTEGER)
self._expect(Tokens.RPAREN)
return string
return string
elif self._accept(Tokens.ARRAY):
self._expect(Tokens.LBRACKET)
value_type = self.type()
self._expect(Tokens.RBRACKET)
return Array(value_type)
elif self._accept(Tokens.SET):
self._expect(Tokens.LBRACKET)
value_type = self.type()
self._expect(Tokens.RBRACKET)
return Set(value_type)
elif self._accept(Tokens.MAP):
self._expect(Tokens.LBRACKET)
self._expect(Tokens.PRIMITIVE)
assert self.tok is not None
key_type = self.tok.value
self._expect(Tokens.COMMA)
value_type = self.type()
self._expect(Tokens.RBRACKET)
return Map(key_type, value_type)
elif self._accept(Tokens.STRUCT):
self._expect(Tokens.LBRACKET)
self._expect(Tokens.FIELD)
assert self.tok is not None
names = [self.tok.value]
self._expect(Tokens.COLON)
types = [self.type()]
while self._accept(Tokens.COMMA):
self._expect(Tokens.FIELD)
names.append(self.tok.value)
self._expect(Tokens.COLON)
types.append(self.type())
self._expect(Tokens.RBRACKET)
return Struct(names, types)
# json data types
elif self._accept(Tokens.JSON):
return JSON()
elif self._accept(Tokens.JSONB):
return JSONB()
# geo spatial data type
elif self._accept(Tokens.GEOMETRY):
return Geometry()
elif self._accept(Tokens.GEOGRAPHY):
return Geography()
elif self._accept(Tokens.POINT):
geotype = None
srid = None
if self._accept(Tokens.SEMICOLON):
self._expect(Tokens.INTEGER)
assert self.tok is not None
srid = self.tok.value
if self._accept(Tokens.COLON):
if self._accept(Tokens.GEOGRAPHY):
geotype = 'geography'
elif self._accept(Tokens.GEOMETRY):
geotype = 'geometry'
return Point(geotype=geotype, srid=srid)
elif self._accept(Tokens.LINESTRING):
geotype = None
srid = None
if self._accept(Tokens.SEMICOLON):
self._expect(Tokens.INTEGER)
assert self.tok is not None
srid = self.tok.value
if self._accept(Tokens.COLON):
if self._accept(Tokens.GEOGRAPHY):
geotype = 'geography'
elif self._accept(Tokens.GEOMETRY):
geotype = 'geometry'
return LineString(geotype=geotype, srid=srid)
elif self._accept(Tokens.POLYGON):
geotype = None
srid = None
if self._accept(Tokens.SEMICOLON):
self._expect(Tokens.INTEGER)
assert self.tok is not None
srid = self.tok.value
if self._accept(Tokens.COLON):
if self._accept(Tokens.GEOGRAPHY):
geotype = 'geography'
elif self._accept(Tokens.GEOMETRY):
geotype = 'geometry'
return Polygon(geotype=geotype, srid=srid)
elif self._accept(Tokens.MULTILINESTRING):
geotype = None
srid = None
if self._accept(Tokens.SEMICOLON):
self._expect(Tokens.INTEGER)
assert self.tok is not None
srid = self.tok.value
if self._accept(Tokens.COLON):
if self._accept(Tokens.GEOGRAPHY):
geotype = 'geography'
elif self._accept(Tokens.GEOMETRY):
geotype = 'geometry'
return MultiLineString(geotype=geotype, srid=srid)
elif self._accept(Tokens.MULTIPOINT):
geotype = None
srid = None
if self._accept(Tokens.SEMICOLON):
self._expect(Tokens.INTEGER)
assert self.tok is not None
srid = self.tok.value
if self._accept(Tokens.COLON):
if self._accept(Tokens.GEOGRAPHY):
geotype = 'geography'
elif self._accept(Tokens.GEOMETRY):
geotype = 'geometry'
return MultiPoint(geotype=geotype, srid=srid)
elif self._accept(Tokens.MULTIPOLYGON):
geotype = None
srid = None
if self._accept(Tokens.SEMICOLON):
self._expect(Tokens.INTEGER)
assert self.tok is not None
srid = self.tok.value
if self._accept(Tokens.COLON):
if self._accept(Tokens.GEOGRAPHY):
geotype = 'geography'
elif self._accept(Tokens.GEOMETRY):
geotype = 'geometry'
return MultiPolygon(geotype=geotype, srid=srid)
# special string based data types
elif self._accept(Tokens.UUID):
return UUID()
elif self._accept(Tokens.MACADDR):
return MACADDR()
elif self._accept(Tokens.INET):
return INET()
else:
raise SyntaxError('Type cannot be parsed: {}'.format(self.text))
dtype = Dispatcher('dtype')
validate_type = dtype
def _get_timedelta_units(timedelta: datetime.timedelta) -> List[str]:
# pandas Timedelta has more granularity
if hasattr(timedelta, 'components'):
unit_fields = timedelta.components._fields
base_object = timedelta.components
# datetime.timedelta only stores days, seconds, and microseconds internally
else:
unit_fields = ['days', 'seconds', 'microseconds']
base_object = timedelta
    time_units = [
        field
        for field in unit_fields
        if getattr(base_object, field) > 0
    ]
return time_units
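# Editor's sketch (not part of the original module): with a pandas Timedelta
# the helper above reports the non-zero components, e.g.
# _get_timedelta_units(pd.Timedelta('2 days 3 hours')) is expected to return
# ['days', 'hours']; a plain datetime.timedelta can only ever report 'days',
# 'seconds' and/or 'microseconds'.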
@dtype.register(object)
def default(value, **kwargs) -> DataType:
raise com.IbisTypeError('Value {!r} is not a valid datatype'.format(value))
@dtype.register(DataType)
def from_ibis_dtype(value: DataType) -> DataType:
return value
@dtype.register(str)
def from_string(value: str) -> DataType:
try:
return TypeParser(value).parse()
except SyntaxError:
raise com.IbisTypeError(
'{!r} cannot be parsed as a datatype'.format(value)
)
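# Editor's sketch (not part of the original module): a few strings the parser
# above is expected to accept, following the grammar in TypeParser.type():
# >>> dtype('array<int64>')
# >>> dtype('map<string, double>')
# >>> dtype('struct<a: int32, b: string>')
# >>> dtype('decimal(12, 2)')
# >>> dtype('point;4326:geometry')   # geospatial type with an SRID and geotype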
@dtype.register(list)
def from_list(values: List[GenericAny]) -> Array:
if not values:
return Array(null)
return Array(highest_precedence(map(dtype, values)))
@dtype.register(collections.abc.Set)
def from_set(values: GenericSet) -> Set:
if not values:
return Set(null)
return Set(highest_precedence(map(dtype, values)))
infer = Dispatcher('infer')
def higher_precedence(left: DataType, right: DataType) -> DataType:
if castable(left, right, upcast=True):
return right
elif castable(right, left, upcast=True):
return left
raise com.IbisTypeError(
'Cannot compute precedence for {} and {} types'.format(left, right)
)
def highest_precedence(dtypes: Iterator[DataType]) -> DataType:
"""Compute the highest precedence of `dtypes`."""
return functools.reduce(higher_precedence, dtypes)
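# Editor's sketch (not part of the original module): precedence follows the
# implicit-cast rules below, so highest_precedence([int8, int32, int64]) should
# resolve to int64, while mixing types with no upcast path (say int8 and
# string) is expected to raise IbisTypeError.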
@infer.register(object)
def infer_dtype_default(value: GenericAny) -> DataType:
"""Default implementation of :func:`~ibis.expr.datatypes.infer`."""
raise com.InputTypeError(value)
@infer.register(collections.OrderedDict)
def infer_struct(value: Mapping[str, GenericAny]) -> Struct:
"""Infer the :class:`~ibis.expr.datatypes.Struct` type of `value`."""
if not value:
raise TypeError('Empty struct type not supported')
return Struct(list(value.keys()), list(map(infer, value.values())))
@infer.register(collections.abc.Mapping)
def infer_map(value: Mapping[GenericAny, GenericAny]) -> Map:
"""Infer the :class:`~ibis.expr.datatypes.Map` type of `value`."""
if not value:
return Map(null, null)
return Map(
highest_precedence(map(infer, value.keys())),
highest_precedence(map(infer, value.values())),
)
@infer.register(list)
def infer_list(values: List[GenericAny]) -> Array:
"""Infer the :class:`~ibis.expr.datatypes.Array` type of `values`."""
if not values:
return Array(null)
return Array(highest_precedence(map(infer, values)))
@infer.register((set, frozenset))
def infer_set(values: GenericSet) -> Set:
"""Infer the :class:`~ibis.expr.datatypes.Set` type of `values`."""
if not values:
return Set(null)
return Set(highest_precedence(map(infer, values)))
@infer.register(datetime.time)
def infer_time(value: datetime.time) -> Time:
return time
@infer.register(datetime.date)
def infer_date(value: datetime.date) -> Date:
return date
@infer.register(datetime.datetime)
def infer_timestamp(value: datetime.datetime) -> Timestamp:
if value.tzinfo:
return Timestamp(timezone=str(value.tzinfo))
else:
return timestamp
@infer.register(datetime.timedelta)
def infer_interval(value: datetime.timedelta) -> Interval:
time_units = _get_timedelta_units(value)
# we can attempt a conversion in the simplest case, i.e. there is exactly
    # one unit (e.g. pd.Timedelta('2 days') vs. pd.Timedelta('2 days 3 hours'))
if len(time_units) == 1:
unit = time_units[0]
return Interval(unit)
else:
return interval
@infer.register(str)
def infer_string(value: str) -> String:
return string
@infer.register(builtins.float)
def infer_floating(value: builtins.float) -> Double:
return double
@infer.register(int)
def infer_integer(value: int, allow_overflow: bool = False) -> Integer:
for dtype in (int8, int16, int32, int64):
if dtype.bounds.lower <= value <= dtype.bounds.upper:
return dtype
if not allow_overflow:
raise OverflowError(value)
return int64
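# Editor's sketch (not part of the original module): integer inference picks
# the narrowest signed type whose bounds contain the value, so infer(127)
# should give int8, infer(128) int16 and infer(2**40) int64; values outside
# int64 raise OverflowError unless allow_overflow=True is passed.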
@infer.register(bool)
def infer_boolean(value: bool) -> Boolean:
return boolean
@infer.register((type(None), Null))
def infer_null(value: Optional[Null]) -> Null:
return null
if IS_SHAPELY_AVAILABLE:
@infer.register(shapely.geometry.Point)
def infer_shapely_point(value: shapely.geometry.Point) -> Point:
return point
@infer.register(shapely.geometry.LineString)
def infer_shapely_linestring(
value: shapely.geometry.LineString,
) -> LineString:
return linestring
@infer.register(shapely.geometry.Polygon)
def infer_shapely_polygon(value: shapely.geometry.Polygon) -> Polygon:
return polygon
@infer.register(shapely.geometry.MultiLineString)
def infer_shapely_multilinestring(
value: shapely.geometry.MultiLineString,
) -> MultiLineString:
return multilinestring
@infer.register(shapely.geometry.MultiPoint)
def infer_shapely_multipoint(
value: shapely.geometry.MultiPoint,
) -> MultiPoint:
return multipoint
@infer.register(shapely.geometry.MultiPolygon)
def infer_shapely_multipolygon(
value: shapely.geometry.MultiPolygon,
) -> MultiPolygon:
return multipolygon
castable = Dispatcher('castable')
@castable.register(DataType, DataType)
def can_cast_subtype(source: DataType, target: DataType, **kwargs) -> bool:
return isinstance(target, type(source))
@castable.register(Any, DataType)
@castable.register(DataType, Any)
@castable.register(Any, Any)
@castable.register(Null, Any)
@castable.register(Integer, Category)
@castable.register(Integer, (Floating, Decimal))
@castable.register(Floating, Decimal)
@castable.register((Date, Timestamp), (Date, Timestamp))
def can_cast_any(source: DataType, target: DataType, **kwargs) -> bool:
return True
@castable.register(Null, DataType)
def can_cast_null(source: DataType, target: DataType, **kwargs) -> bool:
return target.nullable
Integral = TypeVar('Integral', SignedInteger, UnsignedInteger)
@castable.register(SignedInteger, UnsignedInteger)
@castable.register(UnsignedInteger, SignedInteger)
def can_cast_to_differently_signed_integer_type(
source: Integral, target: Integral, value: Optional[int] = None, **kwargs
) -> bool:
if value is None:
return False
bounds = target.bounds
return bounds.lower <= value <= bounds.upper
@castable.register(SignedInteger, SignedInteger)
@castable.register(UnsignedInteger, UnsignedInteger)
def can_cast_integers(source: Integral, target: Integral, **kwargs) -> bool:
return target._nbytes >= source._nbytes
@castable.register(Floating, Floating)
def can_cast_floats(
source: Floating, target: Floating, upcast: bool = False, **kwargs
) -> bool:
if upcast:
return target._nbytes >= source._nbytes
# double -> float must be allowed because
# float literals are inferred as doubles
return True
@castable.register(Decimal, Decimal)
def can_cast_decimals(source: Decimal, target: Decimal, **kwargs) -> bool:
return (
target.precision >= source.precision and target.scale >= source.scale
)
@castable.register(Interval, Interval)
def can_cast_intervals(source: Interval, target: Interval, **kwargs) -> bool:
return source.unit == target.unit and castable(
source.value_type, target.value_type
)
@castable.register(Integer, Boolean)
def can_cast_integer_to_boolean(
source: Integer, target: Boolean, value: Optional[int] = None, **kwargs
) -> bool:
return value is not None and (value == 0 or value == 1)
@castable.register(Integer, Interval)
def can_cast_integer_to_interval(
    source: Integer, target: Interval, **kwargs
) -> bool:
return castable(source, target.value_type)
@castable.register(String, (Date, Time, Timestamp))
def can_cast_string_to_temporal(
source: String,
target: Union[Date, Time, Timestamp],
value: Optional[str] = None,
**kwargs,
) -> bool:
if value is None:
return False
try:
pd.Timestamp(value)
except ValueError:
return False
else:
return True
Collection = TypeVar('Collection', Array, Set)
@castable.register(Array, Array)
@castable.register(Set, Set)
def can_cast_variadic(
source: Collection, target: Collection, **kwargs
) -> bool:
return castable(source.value_type, target.value_type)
@castable.register(JSON, JSON)
def can_cast_json(source, target, **kwargs):
return True
@castable.register(JSONB, JSONB)
def can_cast_jsonb(source, target, **kwargs):
return True
# geo spatial data type
# cast between same type, used to cast from/to geometry and geography
GEO_TYPES = (
Point,
LineString,
Polygon,
MultiLineString,
MultiPoint,
MultiPolygon,
)
@castable.register(Array, GEO_TYPES)
@castable.register(GEO_TYPES, Geometry)
@castable.register(GEO_TYPES, Geography)
def can_cast_geospatial(source, target, **kwargs):
return True
@castable.register(UUID, UUID)
@castable.register(MACADDR, MACADDR)
@castable.register(INET, INET)
def can_cast_special_string(source, target, **kwargs):
return True
# @castable.register(Map, Map)
# def can_cast_maps(source, target):
# return (source.equals(target) or
# source.equals(Map(null, null)) or
# source.equals(Map(any, any)))
# TODO cast category
def cast(
source: Union[DataType, str], target: Union[DataType, str], **kwargs
) -> DataType:
"""Attempts to implicitly cast from source dtype to target dtype"""
source, result_target = dtype(source), dtype(target)
if not castable(source, result_target, **kwargs):
raise com.IbisTypeError(
            'Datatype {} cannot be implicitly '
            'cast to {}'.format(source, result_target)
)
return result_target
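# Illustrative usage sketch (added; not part of the original module) showing
# how the `castable`/`cast` dispatchers registered above compose.  Every name
# referenced (castable, cast, int8, int64, boolean, double) is defined in this
# module; the helper is illustrative only and is not called anywhere.
def _example_implicit_casts():
    assert castable(int8, int64)             # widening integers is allowed
    assert not castable(int64, int8)         # narrowing integers is rejected
    assert castable(int8, boolean, value=1)  # 0/1 literals may become booleans
    return cast(int8, double)                # integers implicitly cast to doubles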
same_kind = Dispatcher(
'same_kind',
doc="""\
Compute whether two :class:`~ibis.expr.datatypes.DataType` instances are the
same kind.
Parameters
----------
a : DataType
b : DataType
Returns
-------
bool
Whether two :class:`~ibis.expr.datatypes.DataType` instances are the same
kind.
""",
)
@same_kind.register(DataType, DataType)
def same_kind_default(a: DataType, b: DataType) -> bool:
"""Return whether `a` is exactly equiavlent to `b`"""
return a.equals(b)
Numeric = TypeVar('Numeric', Integer, Floating)
@same_kind.register(Integer, Integer)
@same_kind.register(Floating, Floating)
def same_kind_numeric(a: Numeric, b: Numeric) -> bool:
"""Return ``True``."""
return True
@same_kind.register(DataType, Null)
def same_kind_right_null(a: DataType, _: Null) -> bool:
"""Return whether `a` is nullable."""
return a.nullable
@same_kind.register(Null, DataType)
def same_kind_left_null(_: Null, b: DataType) -> bool:
"""Return whether `b` is nullable."""
return b.nullable
@same_kind.register(Null, Null)
def same_kind_both_null(a: Null, b: Null) -> bool:
"""Return ``True``."""
return True
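# Illustrative sketch (added; not part of the original module) of the
# `same_kind` rules registered above.  It uses only names defined in this
# module and is never called.
def _example_same_kind():
    assert same_kind(int8, int64)        # any two integer types are the same kind
    assert not same_kind(int8, double)   # otherwise falls back to exact equality
    assert same_kind(null, null)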
| apache-2.0 |
bhargav/scikit-learn | sklearn/manifold/locally_linear.py | 37 | 25852 | """Locally Linear Embedding"""
# Author: Fabian Pedregosa -- <[email protected]>
# Jake Vanderplas -- <[email protected]>
# License: BSD 3 clause (C) INRIA 2011
import numpy as np
from scipy.linalg import eigh, svd, qr, solve
from scipy.sparse import eye, csr_matrix
from ..base import BaseEstimator, TransformerMixin
from ..utils import check_random_state, check_array
from ..utils.arpack import eigsh
from ..utils.validation import check_is_fitted
from ..utils.validation import FLOAT_DTYPES
from ..neighbors import NearestNeighbors
def barycenter_weights(X, Z, reg=1e-3):
"""Compute barycenter weights of X from Y along the first axis
We estimate the weights to assign to each point in Y[i] to recover
the point X[i]. The barycenter weights sum to 1.
Parameters
----------
X : array-like, shape (n_samples, n_dim)
Z : array-like, shape (n_samples, n_neighbors, n_dim)
reg: float, optional
amount of regularization to add for the problem to be
well-posed in the case of n_neighbors > n_dim
Returns
-------
B : array-like, shape (n_samples, n_neighbors)
Notes
-----
See developers note for more information.
"""
X = check_array(X, dtype=FLOAT_DTYPES)
Z = check_array(Z, dtype=FLOAT_DTYPES, allow_nd=True)
n_samples, n_neighbors = X.shape[0], Z.shape[1]
B = np.empty((n_samples, n_neighbors), dtype=X.dtype)
v = np.ones(n_neighbors, dtype=X.dtype)
# this might raise a LinalgError if G is singular and has trace
# zero
for i, A in enumerate(Z.transpose(0, 2, 1)):
C = A.T - X[i] # broadcasting
G = np.dot(C, C.T)
trace = np.trace(G)
if trace > 0:
R = reg * trace
else:
R = reg
G.flat[::Z.shape[1] + 1] += R
w = solve(G, v, sym_pos=True)
B[i, :] = w / np.sum(w)
return B
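# Hedged usage sketch (added; not part of the original module).  It shows the
# contract documented above: each row of the returned weight matrix sums to 1.
# The random numbers are illustrative only; the helper is never called.
def _example_barycenter_weights():
    rng = np.random.RandomState(0)
    X = rng.rand(5, 3)                       # 5 samples in 3 dimensions
    Z = rng.rand(5, 4, 3)                    # 4 "neighbors" per sample
    B = barycenter_weights(X, Z)
    assert B.shape == (5, 4)
    assert np.allclose(B.sum(axis=1), 1.0)   # weights sum to one per sample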
def barycenter_kneighbors_graph(X, n_neighbors, reg=1e-3, n_jobs=1):
"""Computes the barycenter weighted graph of k-Neighbors for points in X
Parameters
----------
X : {array-like, sparse matrix, BallTree, KDTree, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a
numpy array, sparse array, precomputed tree, or NearestNeighbors
object.
n_neighbors : int
Number of neighbors for each sample.
reg : float, optional
Amount of regularization when solving the least-squares
        problem for the barycenter weights. If None, use the default.
n_jobs : int, optional (default = 1)
The number of parallel jobs to run for neighbors search.
If ``-1``, then the number of jobs is set to the number of CPU cores.
Returns
-------
A : sparse matrix in CSR format, shape = [n_samples, n_samples]
A[i, j] is assigned the weight of edge that connects i to j.
See also
--------
sklearn.neighbors.kneighbors_graph
sklearn.neighbors.radius_neighbors_graph
"""
knn = NearestNeighbors(n_neighbors + 1, n_jobs=n_jobs).fit(X)
X = knn._fit_X
n_samples = X.shape[0]
ind = knn.kneighbors(X, return_distance=False)[:, 1:]
data = barycenter_weights(X, X[ind], reg=reg)
indptr = np.arange(0, n_samples * n_neighbors + 1, n_neighbors)
return csr_matrix((data.ravel(), ind.ravel(), indptr),
shape=(n_samples, n_samples))
def null_space(M, k, k_skip=1, eigen_solver='arpack', tol=1E-6, max_iter=100,
random_state=None):
"""
Find the null space of a matrix M.
Parameters
----------
M : {array, matrix, sparse matrix, LinearOperator}
Input covariance matrix: should be symmetric positive semi-definite
k : integer
Number of eigenvalues/vectors to return
k_skip : integer, optional
Number of low eigenvalues to skip.
eigen_solver : string, {'auto', 'arpack', 'dense'}
auto : algorithm will attempt to choose the best method for input data
arpack : use arnoldi iteration in shift-invert mode.
For this method, M may be a dense matrix, sparse matrix,
or general linear operator.
Warning: ARPACK can be unstable for some problems. It is
best to try several random seeds in order to check results.
dense : use standard dense matrix operations for the eigenvalue
decomposition. For this method, M must be an array
or matrix type. This method should be avoided for
large problems.
tol : float, optional
Tolerance for 'arpack' method.
Not used if eigen_solver=='dense'.
max_iter : maximum number of iterations for 'arpack' method
not used if eigen_solver=='dense'
random_state: numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
"""
if eigen_solver == 'auto':
if M.shape[0] > 200 and k + k_skip < 10:
eigen_solver = 'arpack'
else:
eigen_solver = 'dense'
if eigen_solver == 'arpack':
random_state = check_random_state(random_state)
# initialize with [-1,1] as in ARPACK
v0 = random_state.uniform(-1, 1, M.shape[0])
try:
eigen_values, eigen_vectors = eigsh(M, k + k_skip, sigma=0.0,
tol=tol, maxiter=max_iter,
v0=v0)
except RuntimeError as msg:
raise ValueError("Error in determining null-space with ARPACK. "
"Error message: '%s'. "
"Note that method='arpack' can fail when the "
"weight matrix is singular or otherwise "
"ill-behaved. method='dense' is recommended. "
"See online documentation for more information."
% msg)
return eigen_vectors[:, k_skip:], np.sum(eigen_values[k_skip:])
elif eigen_solver == 'dense':
if hasattr(M, 'toarray'):
M = M.toarray()
eigen_values, eigen_vectors = eigh(
M, eigvals=(k_skip, k + k_skip - 1), overwrite_a=True)
index = np.argsort(np.abs(eigen_values))
return eigen_vectors[:, index], np.sum(eigen_values)
else:
raise ValueError("Unrecognized eigen_solver '%s'" % eigen_solver)
def locally_linear_embedding(
X, n_neighbors, n_components, reg=1e-3, eigen_solver='auto', tol=1e-6,
max_iter=100, method='standard', hessian_tol=1E-4, modified_tol=1E-12,
random_state=None, n_jobs=1):
"""Perform a Locally Linear Embedding analysis on the data.
Read more in the :ref:`User Guide <locally_linear_embedding>`.
Parameters
----------
X : {array-like, sparse matrix, BallTree, KDTree, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a
numpy array, sparse array, precomputed tree, or NearestNeighbors
object.
n_neighbors : integer
number of neighbors to consider for each point.
n_components : integer
number of coordinates for the manifold.
reg : float
regularization constant, multiplies the trace of the local covariance
matrix of the distances.
eigen_solver : string, {'auto', 'arpack', 'dense'}
auto : algorithm will attempt to choose the best method for input data
arpack : use arnoldi iteration in shift-invert mode.
For this method, M may be a dense matrix, sparse matrix,
or general linear operator.
Warning: ARPACK can be unstable for some problems. It is
best to try several random seeds in order to check results.
dense : use standard dense matrix operations for the eigenvalue
decomposition. For this method, M must be an array
or matrix type. This method should be avoided for
large problems.
tol : float, optional
Tolerance for 'arpack' method
Not used if eigen_solver=='dense'.
max_iter : integer
maximum number of iterations for the arpack solver.
method : {'standard', 'hessian', 'modified', 'ltsa'}
standard : use the standard locally linear embedding algorithm.
see reference [1]_
hessian : use the Hessian eigenmap method. This method requires
            n_neighbors > n_components * (1 + (n_components + 1) / 2).
see reference [2]_
modified : use the modified locally linear embedding algorithm.
see reference [3]_
ltsa : use local tangent space alignment algorithm
see reference [4]_
hessian_tol : float, optional
Tolerance for Hessian eigenmapping method.
Only used if method == 'hessian'
modified_tol : float, optional
Tolerance for modified LLE method.
Only used if method == 'modified'
random_state: numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
n_jobs : int, optional (default = 1)
The number of parallel jobs to run for neighbors search.
If ``-1``, then the number of jobs is set to the number of CPU cores.
Returns
-------
Y : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
.. [1] `Roweis, S. & Saul, L. Nonlinear dimensionality reduction
by locally linear embedding. Science 290:2323 (2000).`
.. [2] `Donoho, D. & Grimes, C. Hessian eigenmaps: Locally
linear embedding techniques for high-dimensional data.
Proc Natl Acad Sci U S A. 100:5591 (2003).`
.. [3] `Zhang, Z. & Wang, J. MLLE: Modified Locally Linear
Embedding Using Multiple Weights.`
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382
.. [4] `Zhang, Z. & Zha, H. Principal manifolds and nonlinear
dimensionality reduction via tangent space alignment.
Journal of Shanghai Univ. 8:406 (2004)`
"""
if eigen_solver not in ('auto', 'arpack', 'dense'):
raise ValueError("unrecognized eigen_solver '%s'" % eigen_solver)
if method not in ('standard', 'hessian', 'modified', 'ltsa'):
raise ValueError("unrecognized method '%s'" % method)
nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1, n_jobs=n_jobs)
nbrs.fit(X)
X = nbrs._fit_X
N, d_in = X.shape
if n_components > d_in:
raise ValueError("output dimension must be less than or equal "
"to input dimension")
if n_neighbors >= N:
raise ValueError("n_neighbors must be less than number of points")
if n_neighbors <= 0:
raise ValueError("n_neighbors must be positive")
M_sparse = (eigen_solver != 'dense')
if method == 'standard':
W = barycenter_kneighbors_graph(
nbrs, n_neighbors=n_neighbors, reg=reg, n_jobs=n_jobs)
# we'll compute M = (I-W)'(I-W)
# depending on the solver, we'll do this differently
if M_sparse:
M = eye(*W.shape, format=W.format) - W
M = (M.T * M).tocsr()
else:
M = (W.T * W - W.T - W).toarray()
            M.flat[::M.shape[0] + 1] += 1  # M = W'W - W' - W + I = (I - W)'(I - W)
elif method == 'hessian':
dp = n_components * (n_components + 1) // 2
if n_neighbors <= n_components + dp:
raise ValueError("for method='hessian', n_neighbors must be "
"greater than "
"[n_components * (n_components + 3) / 2]")
neighbors = nbrs.kneighbors(X, n_neighbors=n_neighbors + 1,
return_distance=False)
neighbors = neighbors[:, 1:]
Yi = np.empty((n_neighbors, 1 + n_components + dp), dtype=np.float64)
Yi[:, 0] = 1
M = np.zeros((N, N), dtype=np.float64)
use_svd = (n_neighbors > d_in)
for i in range(N):
Gi = X[neighbors[i]]
Gi -= Gi.mean(0)
# build Hessian estimator
if use_svd:
U = svd(Gi, full_matrices=0)[0]
else:
Ci = np.dot(Gi, Gi.T)
U = eigh(Ci)[1][:, ::-1]
Yi[:, 1:1 + n_components] = U[:, :n_components]
j = 1 + n_components
for k in range(n_components):
Yi[:, j:j + n_components - k] = (U[:, k:k + 1] *
U[:, k:n_components])
j += n_components - k
Q, R = qr(Yi)
w = Q[:, n_components + 1:]
S = w.sum(0)
S[np.where(abs(S) < hessian_tol)] = 1
w /= S
nbrs_x, nbrs_y = np.meshgrid(neighbors[i], neighbors[i])
M[nbrs_x, nbrs_y] += np.dot(w, w.T)
if M_sparse:
M = csr_matrix(M)
elif method == 'modified':
if n_neighbors < n_components:
raise ValueError("modified LLE requires "
"n_neighbors >= n_components")
neighbors = nbrs.kneighbors(X, n_neighbors=n_neighbors + 1,
return_distance=False)
neighbors = neighbors[:, 1:]
# find the eigenvectors and eigenvalues of each local covariance
# matrix. We want V[i] to be a [n_neighbors x n_neighbors] matrix,
# where the columns are eigenvectors
V = np.zeros((N, n_neighbors, n_neighbors))
nev = min(d_in, n_neighbors)
evals = np.zeros([N, nev])
# choose the most efficient way to find the eigenvectors
use_svd = (n_neighbors > d_in)
if use_svd:
for i in range(N):
X_nbrs = X[neighbors[i]] - X[i]
V[i], evals[i], _ = svd(X_nbrs,
full_matrices=True)
evals **= 2
else:
for i in range(N):
X_nbrs = X[neighbors[i]] - X[i]
C_nbrs = np.dot(X_nbrs, X_nbrs.T)
evi, vi = eigh(C_nbrs)
evals[i] = evi[::-1]
V[i] = vi[:, ::-1]
# find regularized weights: this is like normal LLE.
# because we've already computed the SVD of each covariance matrix,
# it's faster to use this rather than np.linalg.solve
reg = 1E-3 * evals.sum(1)
tmp = np.dot(V.transpose(0, 2, 1), np.ones(n_neighbors))
tmp[:, :nev] /= evals + reg[:, None]
tmp[:, nev:] /= reg[:, None]
w_reg = np.zeros((N, n_neighbors))
for i in range(N):
w_reg[i] = np.dot(V[i], tmp[i])
w_reg /= w_reg.sum(1)[:, None]
# calculate eta: the median of the ratio of small to large eigenvalues
# across the points. This is used to determine s_i, below
rho = evals[:, n_components:].sum(1) / evals[:, :n_components].sum(1)
eta = np.median(rho)
# find s_i, the size of the "almost null space" for each point:
# this is the size of the largest set of eigenvalues
# such that Sum[v; v in set]/Sum[v; v not in set] < eta
s_range = np.zeros(N, dtype=int)
evals_cumsum = np.cumsum(evals, 1)
eta_range = evals_cumsum[:, -1:] / evals_cumsum[:, :-1] - 1
for i in range(N):
s_range[i] = np.searchsorted(eta_range[i, ::-1], eta)
s_range += n_neighbors - nev # number of zero eigenvalues
# Now calculate M.
# This is the [N x N] matrix whose null space is the desired embedding
M = np.zeros((N, N), dtype=np.float64)
for i in range(N):
s_i = s_range[i]
# select bottom s_i eigenvectors and calculate alpha
Vi = V[i, :, n_neighbors - s_i:]
alpha_i = np.linalg.norm(Vi.sum(0)) / np.sqrt(s_i)
# compute Householder matrix which satisfies
# Hi*Vi.T*ones(n_neighbors) = alpha_i*ones(s)
# using prescription from paper
h = alpha_i * np.ones(s_i) - np.dot(Vi.T, np.ones(n_neighbors))
norm_h = np.linalg.norm(h)
if norm_h < modified_tol:
h *= 0
else:
h /= norm_h
# Householder matrix is
# >> Hi = np.identity(s_i) - 2*np.outer(h,h)
# Then the weight matrix is
# >> Wi = np.dot(Vi,Hi) + (1-alpha_i) * w_reg[i,:,None]
# We do this much more efficiently:
Wi = (Vi - 2 * np.outer(np.dot(Vi, h), h) +
(1 - alpha_i) * w_reg[i, :, None])
# Update M as follows:
# >> W_hat = np.zeros( (N,s_i) )
# >> W_hat[neighbors[i],:] = Wi
# >> W_hat[i] -= 1
# >> M += np.dot(W_hat,W_hat.T)
# We can do this much more efficiently:
nbrs_x, nbrs_y = np.meshgrid(neighbors[i], neighbors[i])
M[nbrs_x, nbrs_y] += np.dot(Wi, Wi.T)
Wi_sum1 = Wi.sum(1)
M[i, neighbors[i]] -= Wi_sum1
M[neighbors[i], i] -= Wi_sum1
M[i, i] += s_i
if M_sparse:
M = csr_matrix(M)
elif method == 'ltsa':
neighbors = nbrs.kneighbors(X, n_neighbors=n_neighbors + 1,
return_distance=False)
neighbors = neighbors[:, 1:]
M = np.zeros((N, N))
use_svd = (n_neighbors > d_in)
for i in range(N):
Xi = X[neighbors[i]]
Xi -= Xi.mean(0)
# compute n_components largest eigenvalues of Xi * Xi^T
if use_svd:
v = svd(Xi, full_matrices=True)[0]
else:
Ci = np.dot(Xi, Xi.T)
v = eigh(Ci)[1][:, ::-1]
Gi = np.zeros((n_neighbors, n_components + 1))
Gi[:, 1:] = v[:, :n_components]
Gi[:, 0] = 1. / np.sqrt(n_neighbors)
GiGiT = np.dot(Gi, Gi.T)
nbrs_x, nbrs_y = np.meshgrid(neighbors[i], neighbors[i])
M[nbrs_x, nbrs_y] -= GiGiT
M[neighbors[i], neighbors[i]] += 1
return null_space(M, n_components, k_skip=1, eigen_solver=eigen_solver,
tol=tol, max_iter=max_iter, random_state=random_state)
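# Hedged usage sketch (added; not part of the original module) of the
# functional interface above.  Only shapes are asserted; the embedding values
# depend on the eigensolver.  The data here is random and purely illustrative.
def _example_locally_linear_embedding():
    rng = np.random.RandomState(42)
    X = rng.rand(100, 5)
    Y, err = locally_linear_embedding(X, n_neighbors=10, n_components=2)
    assert Y.shape == (100, 2)
    return err                               # reconstruction error (a float)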
class LocallyLinearEmbedding(BaseEstimator, TransformerMixin):
"""Locally Linear Embedding
Read more in the :ref:`User Guide <locally_linear_embedding>`.
Parameters
----------
n_neighbors : integer
number of neighbors to consider for each point.
n_components : integer
number of coordinates for the manifold
reg : float
regularization constant, multiplies the trace of the local covariance
matrix of the distances.
eigen_solver : string, {'auto', 'arpack', 'dense'}
auto : algorithm will attempt to choose the best method for input data
arpack : use arnoldi iteration in shift-invert mode.
For this method, M may be a dense matrix, sparse matrix,
or general linear operator.
Warning: ARPACK can be unstable for some problems. It is
best to try several random seeds in order to check results.
dense : use standard dense matrix operations for the eigenvalue
decomposition. For this method, M must be an array
or matrix type. This method should be avoided for
large problems.
tol : float, optional
Tolerance for 'arpack' method
Not used if eigen_solver=='dense'.
max_iter : integer
maximum number of iterations for the arpack solver.
Not used if eigen_solver=='dense'.
method : string ('standard', 'hessian', 'modified' or 'ltsa')
standard : use the standard locally linear embedding algorithm. see
reference [1]
hessian : use the Hessian eigenmap method. This method requires
            ``n_neighbors > n_components * (1 + (n_components + 1) / 2)``
see reference [2]
modified : use the modified locally linear embedding algorithm.
see reference [3]
ltsa : use local tangent space alignment algorithm
see reference [4]
hessian_tol : float, optional
Tolerance for Hessian eigenmapping method.
Only used if ``method == 'hessian'``
modified_tol : float, optional
Tolerance for modified LLE method.
Only used if ``method == 'modified'``
neighbors_algorithm : string ['auto'|'brute'|'kd_tree'|'ball_tree']
algorithm to use for nearest neighbors search,
passed to neighbors.NearestNeighbors instance
random_state: numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
n_jobs : int, optional (default = 1)
The number of parallel jobs to run.
If ``-1``, then the number of jobs is set to the number of CPU cores.
Attributes
----------
    embedding_ : array-like, shape [n_samples, n_components]
        Stores the embedding vectors
    reconstruction_error_ : float
        Reconstruction error associated with `embedding_`
nbrs_ : NearestNeighbors object
Stores nearest neighbors instance, including BallTree or KDtree
if applicable.
References
----------
.. [1] `Roweis, S. & Saul, L. Nonlinear dimensionality reduction
by locally linear embedding. Science 290:2323 (2000).`
.. [2] `Donoho, D. & Grimes, C. Hessian eigenmaps: Locally
linear embedding techniques for high-dimensional data.
Proc Natl Acad Sci U S A. 100:5591 (2003).`
.. [3] `Zhang, Z. & Wang, J. MLLE: Modified Locally Linear
Embedding Using Multiple Weights.`
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382
.. [4] `Zhang, Z. & Zha, H. Principal manifolds and nonlinear
dimensionality reduction via tangent space alignment.
Journal of Shanghai Univ. 8:406 (2004)`
"""
def __init__(self, n_neighbors=5, n_components=2, reg=1E-3,
eigen_solver='auto', tol=1E-6, max_iter=100,
method='standard', hessian_tol=1E-4, modified_tol=1E-12,
neighbors_algorithm='auto', random_state=None, n_jobs=1):
self.n_neighbors = n_neighbors
self.n_components = n_components
self.reg = reg
self.eigen_solver = eigen_solver
self.tol = tol
self.max_iter = max_iter
self.method = method
self.hessian_tol = hessian_tol
self.modified_tol = modified_tol
self.random_state = random_state
self.neighbors_algorithm = neighbors_algorithm
self.n_jobs = n_jobs
def _fit_transform(self, X):
self.nbrs_ = NearestNeighbors(self.n_neighbors,
algorithm=self.neighbors_algorithm,
n_jobs=self.n_jobs)
random_state = check_random_state(self.random_state)
X = check_array(X)
self.nbrs_.fit(X)
self.embedding_, self.reconstruction_error_ = \
locally_linear_embedding(
self.nbrs_, self.n_neighbors, self.n_components,
eigen_solver=self.eigen_solver, tol=self.tol,
max_iter=self.max_iter, method=self.method,
hessian_tol=self.hessian_tol, modified_tol=self.modified_tol,
random_state=random_state, reg=self.reg, n_jobs=self.n_jobs)
def fit(self, X, y=None):
"""Compute the embedding vectors for data X
Parameters
----------
X : array-like of shape [n_samples, n_features]
training set.
Returns
-------
self : returns an instance of self.
"""
self._fit_transform(X)
return self
def fit_transform(self, X, y=None):
"""Compute the embedding vectors for data X and transform X.
Parameters
----------
X : array-like of shape [n_samples, n_features]
training set.
Returns
-------
X_new: array-like, shape (n_samples, n_components)
"""
self._fit_transform(X)
return self.embedding_
def transform(self, X):
"""
Transform new points into embedding space.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
X_new : array, shape = [n_samples, n_components]
Notes
-----
Because of scaling performed by this method, it is discouraged to use
it together with methods that are not scale-invariant (like SVMs)
"""
check_is_fitted(self, "nbrs_")
X = check_array(X)
ind = self.nbrs_.kneighbors(X, n_neighbors=self.n_neighbors,
return_distance=False)
weights = barycenter_weights(X, self.nbrs_._fit_X[ind],
reg=self.reg)
X_new = np.empty((X.shape[0], self.n_components))
for i in range(X.shape[0]):
X_new[i] = np.dot(self.embedding_[ind[i]].T, weights[i])
return X_new
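# Hedged usage sketch (added; not part of the original module): the estimator
# above follows the usual scikit-learn fit/transform pattern.  Only shapes are
# asserted; the data is random and purely illustrative.
def _example_locally_linear_embedding_estimator():
    rng = np.random.RandomState(0)
    X = rng.rand(80, 6)
    lle = LocallyLinearEmbedding(n_neighbors=8, n_components=2)
    X_2d = lle.fit_transform(X)
    assert X_2d.shape == (80, 2)
    # previously unseen points can be mapped into the learned space afterwards
    assert lle.transform(rng.rand(3, 6)).shape == (3, 2)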
| bsd-3-clause |
mwiebe/numpy | numpy/lib/recfunctions.py | 148 | 35012 | """
Collection of utilities to manipulate structured arrays.
Most of these functions were initially implemented by John Hunter for
matplotlib. They have been rewritten and extended for convenience.
"""
from __future__ import division, absolute_import, print_function
import sys
import itertools
import numpy as np
import numpy.ma as ma
from numpy import ndarray, recarray
from numpy.ma import MaskedArray
from numpy.ma.mrecords import MaskedRecords
from numpy.lib._iotools import _is_string_like
from numpy.compat import basestring
if sys.version_info[0] < 3:
from future_builtins import zip
_check_fill_value = np.ma.core._check_fill_value
__all__ = [
'append_fields', 'drop_fields', 'find_duplicates',
'get_fieldstructure', 'join_by', 'merge_arrays',
'rec_append_fields', 'rec_drop_fields', 'rec_join',
'recursive_fill_fields', 'rename_fields', 'stack_arrays',
]
def recursive_fill_fields(input, output):
"""
    Fills the fields of `output` with the corresponding fields of `input`,
    with support for nested structures.
Parameters
----------
input : ndarray
Input array.
output : ndarray
Output array.
Notes
-----
* `output` should be at least the same size as `input`
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)])
>>> b = np.zeros((3,), dtype=a.dtype)
>>> rfn.recursive_fill_fields(a, b)
array([(1, 10.0), (2, 20.0), (0, 0.0)],
dtype=[('A', '<i4'), ('B', '<f8')])
"""
newdtype = output.dtype
for field in newdtype.names:
try:
current = input[field]
except ValueError:
continue
if current.dtype.names:
recursive_fill_fields(current, output[field])
else:
output[field][:len(current)] = current
return output
def get_names(adtype):
"""
Returns the field names of the input datatype as a tuple.
Parameters
----------
adtype : dtype
Input datatype
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> rfn.get_names(np.empty((1,), dtype=int)) is None
True
>>> rfn.get_names(np.empty((1,), dtype=[('A',int), ('B', float)]))
('A', 'B')
>>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])])
>>> rfn.get_names(adtype)
('a', ('b', ('ba', 'bb')))
"""
listnames = []
names = adtype.names
for name in names:
current = adtype[name]
if current.names:
listnames.append((name, tuple(get_names(current))))
else:
listnames.append(name)
return tuple(listnames) or None
def get_names_flat(adtype):
"""
    Returns the field names of the input datatype as a tuple. Nested structures
    are flattened beforehand.
Parameters
----------
adtype : dtype
Input datatype
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> rfn.get_names_flat(np.empty((1,), dtype=int)) is None
True
>>> rfn.get_names_flat(np.empty((1,), dtype=[('A',int), ('B', float)]))
('A', 'B')
>>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])])
>>> rfn.get_names_flat(adtype)
('a', 'b', 'ba', 'bb')
"""
listnames = []
names = adtype.names
for name in names:
listnames.append(name)
current = adtype[name]
if current.names:
listnames.extend(get_names_flat(current))
return tuple(listnames) or None
def flatten_descr(ndtype):
"""
Flatten a structured data-type description.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> ndtype = np.dtype([('a', '<i4'), ('b', [('ba', '<f8'), ('bb', '<i4')])])
>>> rfn.flatten_descr(ndtype)
(('a', dtype('int32')), ('ba', dtype('float64')), ('bb', dtype('int32')))
"""
names = ndtype.names
if names is None:
return ndtype.descr
else:
descr = []
for field in names:
(typ, _) = ndtype.fields[field]
if typ.names:
descr.extend(flatten_descr(typ))
else:
descr.append((field, typ))
return tuple(descr)
def zip_descr(seqarrays, flatten=False):
"""
Combine the dtype description of a series of arrays.
Parameters
----------
seqarrays : sequence of arrays
Sequence of arrays
flatten : {boolean}, optional
Whether to collapse nested descriptions.
"""
newdtype = []
if flatten:
for a in seqarrays:
newdtype.extend(flatten_descr(a.dtype))
else:
for a in seqarrays:
current = a.dtype
names = current.names or ()
if len(names) > 1:
newdtype.append(('', current.descr))
else:
newdtype.extend(current.descr)
return np.dtype(newdtype).descr
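# Hedged example (added; not part of the original module): combining two
# one-field dtypes with `zip_descr`.  The exact type strings ('<i8', '<f8')
# depend on the platform and numpy version and are shown only as an
# illustration.  The helper is never called.
def _example_zip_descr():
    a = np.zeros(2, dtype=[('a', int)])
    b = np.zeros(2, dtype=[('b', float)])
    return zip_descr((a, b))            # e.g. [('a', '<i8'), ('b', '<f8')]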
def get_fieldstructure(adtype, lastname=None, parents=None,):
"""
Returns a dictionary with fields indexing lists of their parent fields.
This function is used to simplify access to fields nested in other fields.
Parameters
----------
adtype : np.dtype
Input datatype
lastname : optional
Last processed field name (used internally during recursion).
parents : dictionary
        Dictionary of parent fields (used internally during recursion).
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> ndtype = np.dtype([('A', int),
... ('B', [('BA', int),
... ('BB', [('BBA', int), ('BBB', int)])])])
>>> rfn.get_fieldstructure(ndtype)
... # XXX: possible regression, order of BBA and BBB is swapped
{'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']}
"""
if parents is None:
parents = {}
names = adtype.names
for name in names:
current = adtype[name]
if current.names:
if lastname:
parents[name] = [lastname, ]
else:
parents[name] = []
parents.update(get_fieldstructure(current, name, parents))
else:
lastparent = [_ for _ in (parents.get(lastname, []) or [])]
if lastparent:
lastparent.append(lastname)
elif lastname:
lastparent = [lastname, ]
parents[name] = lastparent or []
return parents or None
def _izip_fields_flat(iterable):
"""
Returns an iterator of concatenated fields from a sequence of arrays,
collapsing any nested structure.
"""
for element in iterable:
if isinstance(element, np.void):
for f in _izip_fields_flat(tuple(element)):
yield f
else:
yield element
def _izip_fields(iterable):
"""
Returns an iterator of concatenated fields from a sequence of arrays.
"""
for element in iterable:
if (hasattr(element, '__iter__') and
not isinstance(element, basestring)):
for f in _izip_fields(element):
yield f
elif isinstance(element, np.void) and len(tuple(element)) == 1:
for f in _izip_fields(element):
yield f
else:
yield element
def izip_records(seqarrays, fill_value=None, flatten=True):
"""
Returns an iterator of concatenated items from a sequence of arrays.
Parameters
----------
seqarrays : sequence of arrays
Sequence of arrays.
fill_value : {None, integer}
Value used to pad shorter iterables.
    flatten : {True, False}, optional
        Whether to collapse nested fields.
"""
# OK, that's a complete ripoff from Python2.6 itertools.izip_longest
def sentinel(counter=([fill_value] * (len(seqarrays) - 1)).pop):
"Yields the fill_value or raises IndexError"
yield counter()
#
fillers = itertools.repeat(fill_value)
iters = [itertools.chain(it, sentinel(), fillers) for it in seqarrays]
# Should we flatten the items, or just use a nested approach
if flatten:
zipfunc = _izip_fields_flat
else:
zipfunc = _izip_fields
#
try:
for tup in zip(*iters):
yield tuple(zipfunc(tup))
except IndexError:
pass
def _fix_output(output, usemask=True, asrecarray=False):
"""
Private function: return a recarray, a ndarray, a MaskedArray
or a MaskedRecords depending on the input parameters
"""
if not isinstance(output, MaskedArray):
usemask = False
if usemask:
if asrecarray:
output = output.view(MaskedRecords)
else:
output = ma.filled(output)
if asrecarray:
output = output.view(recarray)
return output
def _fix_defaults(output, defaults=None):
"""
Update the fill_value and masked data of `output`
from the default given in a dictionary defaults.
"""
names = output.dtype.names
(data, mask, fill_value) = (output.data, output.mask, output.fill_value)
for (k, v) in (defaults or {}).items():
if k in names:
fill_value[k] = v
data[k][mask[k]] = v
return output
def merge_arrays(seqarrays, fill_value=-1, flatten=False,
usemask=False, asrecarray=False):
"""
Merge arrays field by field.
Parameters
----------
seqarrays : sequence of ndarrays
Sequence of arrays
fill_value : {float}, optional
Filling value used to pad missing data on the shorter arrays.
flatten : {False, True}, optional
Whether to collapse nested fields.
usemask : {False, True}, optional
Whether to return a masked array or not.
asrecarray : {False, True}, optional
Whether to return a recarray (MaskedRecords) or not.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])))
masked_array(data = [(1, 10.0) (2, 20.0) (--, 30.0)],
mask = [(False, False) (False, False) (True, False)],
fill_value = (999999, 1e+20),
dtype = [('f0', '<i4'), ('f1', '<f8')])
>>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])),
... usemask=False)
array([(1, 10.0), (2, 20.0), (-1, 30.0)],
dtype=[('f0', '<i4'), ('f1', '<f8')])
>>> rfn.merge_arrays((np.array([1, 2]).view([('a', int)]),
... np.array([10., 20., 30.])),
... usemask=False, asrecarray=True)
rec.array([(1, 10.0), (2, 20.0), (-1, 30.0)],
dtype=[('a', '<i4'), ('f1', '<f8')])
Notes
-----
    * Without a mask, the missing value will be filled with something
      depending on its corresponding type:
-1 for integers
-1.0 for floating point numbers
'-' for characters
'-1' for strings
True for boolean values
* XXX: I just obtained these values empirically
"""
# Only one item in the input sequence ?
if (len(seqarrays) == 1):
seqarrays = np.asanyarray(seqarrays[0])
# Do we have a single ndarray as input ?
if isinstance(seqarrays, (ndarray, np.void)):
seqdtype = seqarrays.dtype
if (not flatten) or \
(zip_descr((seqarrays,), flatten=True) == seqdtype.descr):
            # Minimal processing needed: just make sure everything's a-ok
seqarrays = seqarrays.ravel()
# Make sure we have named fields
if not seqdtype.names:
seqdtype = [('', seqdtype)]
# Find what type of array we must return
if usemask:
if asrecarray:
seqtype = MaskedRecords
else:
seqtype = MaskedArray
elif asrecarray:
seqtype = recarray
else:
seqtype = ndarray
return seqarrays.view(dtype=seqdtype, type=seqtype)
else:
seqarrays = (seqarrays,)
else:
# Make sure we have arrays in the input sequence
seqarrays = [np.asanyarray(_m) for _m in seqarrays]
# Find the sizes of the inputs and their maximum
sizes = tuple(a.size for a in seqarrays)
maxlength = max(sizes)
# Get the dtype of the output (flattening if needed)
newdtype = zip_descr(seqarrays, flatten=flatten)
# Initialize the sequences for data and mask
seqdata = []
seqmask = []
# If we expect some kind of MaskedArray, make a special loop.
if usemask:
for (a, n) in zip(seqarrays, sizes):
nbmissing = (maxlength - n)
# Get the data and mask
data = a.ravel().__array__()
mask = ma.getmaskarray(a).ravel()
# Get the filling value (if needed)
if nbmissing:
fval = _check_fill_value(fill_value, a.dtype)
if isinstance(fval, (ndarray, np.void)):
if len(fval.dtype) == 1:
fval = fval.item()[0]
fmsk = True
else:
fval = np.array(fval, dtype=a.dtype, ndmin=1)
fmsk = np.ones((1,), dtype=mask.dtype)
else:
fval = None
fmsk = True
# Store an iterator padding the input to the expected length
seqdata.append(itertools.chain(data, [fval] * nbmissing))
seqmask.append(itertools.chain(mask, [fmsk] * nbmissing))
# Create an iterator for the data
data = tuple(izip_records(seqdata, flatten=flatten))
output = ma.array(np.fromiter(data, dtype=newdtype, count=maxlength),
mask=list(izip_records(seqmask, flatten=flatten)))
if asrecarray:
output = output.view(MaskedRecords)
else:
# Same as before, without the mask we don't need...
for (a, n) in zip(seqarrays, sizes):
nbmissing = (maxlength - n)
data = a.ravel().__array__()
if nbmissing:
fval = _check_fill_value(fill_value, a.dtype)
if isinstance(fval, (ndarray, np.void)):
if len(fval.dtype) == 1:
fval = fval.item()[0]
else:
fval = np.array(fval, dtype=a.dtype, ndmin=1)
else:
fval = None
seqdata.append(itertools.chain(data, [fval] * nbmissing))
output = np.fromiter(tuple(izip_records(seqdata, flatten=flatten)),
dtype=newdtype, count=maxlength)
if asrecarray:
output = output.view(recarray)
# And we're done...
return output
def drop_fields(base, drop_names, usemask=True, asrecarray=False):
"""
Return a new array with fields in `drop_names` dropped.
Nested fields are supported.
Parameters
----------
base : array
Input array
drop_names : string or sequence
String or sequence of strings corresponding to the names of the
fields to drop.
usemask : {False, True}, optional
Whether to return a masked array or not.
    asrecarray : {False, True}, optional
Whether to return a recarray or a mrecarray (`asrecarray=True`) or
a plain ndarray or masked array with flexible dtype. The default
is False.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))],
... dtype=[('a', int), ('b', [('ba', float), ('bb', int)])])
>>> rfn.drop_fields(a, 'a')
array([((2.0, 3),), ((5.0, 6),)],
dtype=[('b', [('ba', '<f8'), ('bb', '<i4')])])
>>> rfn.drop_fields(a, 'ba')
array([(1, (3,)), (4, (6,))],
dtype=[('a', '<i4'), ('b', [('bb', '<i4')])])
>>> rfn.drop_fields(a, ['ba', 'bb'])
array([(1,), (4,)],
dtype=[('a', '<i4')])
"""
if _is_string_like(drop_names):
drop_names = [drop_names, ]
else:
drop_names = set(drop_names)
def _drop_descr(ndtype, drop_names):
names = ndtype.names
newdtype = []
for name in names:
current = ndtype[name]
if name in drop_names:
continue
if current.names:
descr = _drop_descr(current, drop_names)
if descr:
newdtype.append((name, descr))
else:
newdtype.append((name, current))
return newdtype
newdtype = _drop_descr(base.dtype, drop_names)
if not newdtype:
return None
output = np.empty(base.shape, dtype=newdtype)
output = recursive_fill_fields(base, output)
return _fix_output(output, usemask=usemask, asrecarray=asrecarray)
def rec_drop_fields(base, drop_names):
"""
Returns a new numpy.recarray with fields in `drop_names` dropped.
"""
return drop_fields(base, drop_names, usemask=False, asrecarray=True)
def rename_fields(base, namemapper):
"""
Rename the fields from a flexible-datatype ndarray or recarray.
Nested fields are supported.
Parameters
----------
base : ndarray
Input array whose fields must be modified.
namemapper : dictionary
Dictionary mapping old field names to their new version.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))],
... dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])])
>>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'})
array([(1, (2.0, [3.0, 30.0])), (4, (5.0, [6.0, 60.0]))],
dtype=[('A', '<i4'), ('b', [('ba', '<f8'), ('BB', '<f8', 2)])])
"""
def _recursive_rename_fields(ndtype, namemapper):
newdtype = []
for name in ndtype.names:
newname = namemapper.get(name, name)
current = ndtype[name]
if current.names:
newdtype.append(
(newname, _recursive_rename_fields(current, namemapper))
)
else:
newdtype.append((newname, current))
return newdtype
newdtype = _recursive_rename_fields(base.dtype, namemapper)
return base.view(newdtype)
def append_fields(base, names, data, dtypes=None,
fill_value=-1, usemask=True, asrecarray=False):
"""
Add new fields to an existing array.
The names of the fields are given with the `names` arguments,
the corresponding values with the `data` arguments.
If a single field is appended, `names`, `data` and `dtypes` do not have
to be lists but just values.
Parameters
----------
base : array
Input array to extend.
names : string, sequence
String or sequence of strings corresponding to the names
of the new fields.
data : array or sequence of arrays
Array or sequence of arrays storing the fields to add to the base.
dtypes : sequence of datatypes, optional
Datatype or sequence of datatypes.
If None, the datatypes are estimated from the `data`.
fill_value : {float}, optional
Filling value used to pad missing data on the shorter arrays.
usemask : {False, True}, optional
Whether to return a masked array or not.
asrecarray : {False, True}, optional
Whether to return a recarray (MaskedRecords) or not.
"""
# Check the names
if isinstance(names, (tuple, list)):
if len(names) != len(data):
msg = "The number of arrays does not match the number of names"
raise ValueError(msg)
elif isinstance(names, basestring):
names = [names, ]
data = [data, ]
#
if dtypes is None:
data = [np.array(a, copy=False, subok=True) for a in data]
data = [a.view([(name, a.dtype)]) for (name, a) in zip(names, data)]
else:
if not isinstance(dtypes, (tuple, list)):
dtypes = [dtypes, ]
if len(data) != len(dtypes):
if len(dtypes) == 1:
dtypes = dtypes * len(data)
else:
msg = "The dtypes argument must be None, a dtype, or a list."
raise ValueError(msg)
data = [np.array(a, copy=False, subok=True, dtype=d).view([(n, d)])
for (a, n, d) in zip(data, names, dtypes)]
#
base = merge_arrays(base, usemask=usemask, fill_value=fill_value)
if len(data) > 1:
data = merge_arrays(data, flatten=True, usemask=usemask,
fill_value=fill_value)
else:
data = data.pop()
#
output = ma.masked_all(max(len(base), len(data)),
dtype=base.dtype.descr + data.dtype.descr)
output = recursive_fill_fields(base, output)
output = recursive_fill_fields(data, output)
#
return _fix_output(output, usemask=usemask, asrecarray=asrecarray)
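# Hedged example (added; not part of the original module): a minimal use of
# `append_fields` matching the description above.  Only the resulting field
# names are asserted, since the exact repr depends on the numpy version.
# The helper is never called.
def _example_append_fields():
    base = np.array([(1, 10.0), (2, 20.0)], dtype=[('a', int), ('b', float)])
    out = append_fields(base, 'c', data=[100, 200], usemask=False)
    assert out.dtype.names == ('a', 'b', 'c')
    return out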
def rec_append_fields(base, names, data, dtypes=None):
"""
Add new fields to an existing array.
The names of the fields are given with the `names` arguments,
the corresponding values with the `data` arguments.
If a single field is appended, `names`, `data` and `dtypes` do not have
to be lists but just values.
Parameters
----------
base : array
Input array to extend.
names : string, sequence
String or sequence of strings corresponding to the names
of the new fields.
data : array or sequence of arrays
Array or sequence of arrays storing the fields to add to the base.
dtypes : sequence of datatypes, optional
Datatype or sequence of datatypes.
If None, the datatypes are estimated from the `data`.
See Also
--------
append_fields
Returns
-------
appended_array : np.recarray
"""
return append_fields(base, names, data=data, dtypes=dtypes,
asrecarray=True, usemask=False)
def stack_arrays(arrays, defaults=None, usemask=True, asrecarray=False,
autoconvert=False):
"""
    Superposes arrays field by field
Parameters
----------
arrays : array or sequence
Sequence of input arrays.
defaults : dictionary, optional
Dictionary mapping field names to the corresponding default values.
usemask : {True, False}, optional
        Whether to return a MaskedArray (or MaskedRecords if
        `asrecarray==True`) or a ndarray.
asrecarray : {False, True}, optional
Whether to return a recarray (or MaskedRecords if `usemask==True`)
or just a flexible-type ndarray.
autoconvert : {False, True}, optional
        Whether to automatically cast the type of the field to the maximum.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> x = np.array([1, 2,])
>>> rfn.stack_arrays(x) is x
True
>>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)])
>>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)],
... dtype=[('A', '|S3'), ('B', float), ('C', float)])
>>> test = rfn.stack_arrays((z,zz))
>>> test
masked_array(data = [('A', 1.0, --) ('B', 2.0, --) ('a', 10.0, 100.0) ('b', 20.0, 200.0)
('c', 30.0, 300.0)],
mask = [(False, False, True) (False, False, True) (False, False, False)
(False, False, False) (False, False, False)],
fill_value = ('N/A', 1e+20, 1e+20),
dtype = [('A', '|S3'), ('B', '<f8'), ('C', '<f8')])
"""
if isinstance(arrays, ndarray):
return arrays
elif len(arrays) == 1:
return arrays[0]
seqarrays = [np.asanyarray(a).ravel() for a in arrays]
nrecords = [len(a) for a in seqarrays]
ndtype = [a.dtype for a in seqarrays]
fldnames = [d.names for d in ndtype]
#
dtype_l = ndtype[0]
newdescr = dtype_l.descr
names = [_[0] for _ in newdescr]
for dtype_n in ndtype[1:]:
for descr in dtype_n.descr:
name = descr[0] or ''
if name not in names:
newdescr.append(descr)
names.append(name)
else:
nameidx = names.index(name)
current_descr = newdescr[nameidx]
if autoconvert:
if np.dtype(descr[1]) > np.dtype(current_descr[-1]):
current_descr = list(current_descr)
current_descr[-1] = descr[1]
newdescr[nameidx] = tuple(current_descr)
elif descr[1] != current_descr[-1]:
raise TypeError("Incompatible type '%s' <> '%s'" %
(dict(newdescr)[name], descr[1]))
# Only one field: use concatenate
if len(newdescr) == 1:
output = ma.concatenate(seqarrays)
else:
#
output = ma.masked_all((np.sum(nrecords),), newdescr)
offset = np.cumsum(np.r_[0, nrecords])
seen = []
for (a, n, i, j) in zip(seqarrays, fldnames, offset[:-1], offset[1:]):
names = a.dtype.names
if names is None:
output['f%i' % len(seen)][i:j] = a
else:
for name in n:
output[name][i:j] = a[name]
if name not in seen:
seen.append(name)
#
return _fix_output(_fix_defaults(output, defaults),
usemask=usemask, asrecarray=asrecarray)
def find_duplicates(a, key=None, ignoremask=True, return_index=False):
"""
Find the duplicates in a structured array along a given key
Parameters
----------
a : array-like
Input array
key : {string, None}, optional
Name of the fields along which to check the duplicates.
If None, the search is performed by records
ignoremask : {True, False}, optional
Whether masked data should be discarded or considered as duplicates.
return_index : {False, True}, optional
Whether to return the indices of the duplicated values.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> ndtype = [('a', int)]
>>> a = np.ma.array([1, 1, 1, 2, 2, 3, 3],
... mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype)
>>> rfn.find_duplicates(a, ignoremask=True, return_index=True)
... # XXX: judging by the output, the ignoremask flag has no effect
"""
a = np.asanyarray(a).ravel()
# Get a dictionary of fields
fields = get_fieldstructure(a.dtype)
# Get the sorting data (by selecting the corresponding field)
base = a
if key:
for f in fields[key]:
base = base[f]
base = base[key]
# Get the sorting indices and the sorted data
sortidx = base.argsort()
sortedbase = base[sortidx]
sorteddata = sortedbase.filled()
# Compare the sorting data
flag = (sorteddata[:-1] == sorteddata[1:])
# If masked data must be ignored, set the flag to false where needed
if ignoremask:
sortedmask = sortedbase.recordmask
flag[sortedmask[1:]] = False
flag = np.concatenate(([False], flag))
# We need to take the point on the left as well (else we're missing it)
flag[:-1] = flag[:-1] + flag[1:]
duplicates = a[sortidx][flag]
if return_index:
return (duplicates, sortidx[flag])
else:
return duplicates
def join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
defaults=None, usemask=True, asrecarray=False):
"""
Join arrays `r1` and `r2` on key `key`.
    The key should be either a string or a sequence of strings corresponding
    to the fields used to join the arrays. An exception is raised if the
`key` field cannot be found in the two input arrays. Neither `r1` nor
`r2` should have any duplicates along `key`: the presence of duplicates
will make the output quite unreliable. Note that duplicates are not
looked for by the algorithm.
Parameters
----------
key : {string, sequence}
A string or a sequence of strings corresponding to the fields used
for comparison.
r1, r2 : arrays
Structured arrays.
jointype : {'inner', 'outer', 'leftouter'}, optional
If 'inner', returns the elements common to both r1 and r2.
        If 'outer', returns the common elements as well as the elements of
        r1 not in r2 and the elements of r2 not in r1.
If 'leftouter', returns the common elements and the elements of r1
not in r2.
r1postfix : string, optional
String appended to the names of the fields of r1 that are present
        in r2 but absent from the key.
r2postfix : string, optional
String appended to the names of the fields of r2 that are present
        in r1 but absent from the key.
defaults : {dictionary}, optional
Dictionary mapping field names to the corresponding default values.
usemask : {True, False}, optional
        Whether to return a MaskedArray (or MaskedRecords if
        `asrecarray==True`) or a ndarray.
asrecarray : {False, True}, optional
Whether to return a recarray (or MaskedRecords if `usemask==True`)
or just a flexible-type ndarray.
Notes
-----
* The output is sorted along the key.
* A temporary array is formed by dropping the fields not in the key for
the two arrays and concatenating the result. This array is then
sorted, and the common entries selected. The output is constructed by
filling the fields with the selected entries. Matching is not
preserved if there are some duplicates...
"""
# Check jointype
if jointype not in ('inner', 'outer', 'leftouter'):
raise ValueError(
"The 'jointype' argument should be in 'inner', "
"'outer' or 'leftouter' (got '%s' instead)" % jointype
)
# If we have a single key, put it in a tuple
if isinstance(key, basestring):
key = (key,)
# Check the keys
for name in key:
if name not in r1.dtype.names:
raise ValueError('r1 does not have key field %s' % name)
if name not in r2.dtype.names:
raise ValueError('r2 does not have key field %s' % name)
# Make sure we work with ravelled arrays
r1 = r1.ravel()
r2 = r2.ravel()
# Fixme: nb2 below is never used. Commenting out for pyflakes.
# (nb1, nb2) = (len(r1), len(r2))
nb1 = len(r1)
(r1names, r2names) = (r1.dtype.names, r2.dtype.names)
# Check the names for collision
if (set.intersection(set(r1names), set(r2names)).difference(key) and
not (r1postfix or r2postfix)):
msg = "r1 and r2 contain common names, r1postfix and r2postfix "
msg += "can't be empty"
raise ValueError(msg)
# Make temporary arrays of just the keys
r1k = drop_fields(r1, [n for n in r1names if n not in key])
r2k = drop_fields(r2, [n for n in r2names if n not in key])
# Concatenate the two arrays for comparison
aux = ma.concatenate((r1k, r2k))
idx_sort = aux.argsort(order=key)
aux = aux[idx_sort]
#
# Get the common keys
flag_in = ma.concatenate(([False], aux[1:] == aux[:-1]))
flag_in[:-1] = flag_in[1:] + flag_in[:-1]
idx_in = idx_sort[flag_in]
idx_1 = idx_in[(idx_in < nb1)]
idx_2 = idx_in[(idx_in >= nb1)] - nb1
(r1cmn, r2cmn) = (len(idx_1), len(idx_2))
if jointype == 'inner':
(r1spc, r2spc) = (0, 0)
elif jointype == 'outer':
idx_out = idx_sort[~flag_in]
idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)]))
idx_2 = np.concatenate((idx_2, idx_out[(idx_out >= nb1)] - nb1))
(r1spc, r2spc) = (len(idx_1) - r1cmn, len(idx_2) - r2cmn)
elif jointype == 'leftouter':
idx_out = idx_sort[~flag_in]
idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)]))
(r1spc, r2spc) = (len(idx_1) - r1cmn, 0)
# Select the entries from each input
(s1, s2) = (r1[idx_1], r2[idx_2])
#
# Build the new description of the output array .......
# Start with the key fields
ndtype = [list(_) for _ in r1k.dtype.descr]
# Add the other fields
ndtype.extend(list(_) for _ in r1.dtype.descr if _[0] not in key)
# Find the new list of names (it may be different from r1names)
names = list(_[0] for _ in ndtype)
for desc in r2.dtype.descr:
desc = list(desc)
name = desc[0]
# Have we seen the current name already ?
if name in names:
nameidx = ndtype.index(desc)
current = ndtype[nameidx]
# The current field is part of the key: take the largest dtype
if name in key:
current[-1] = max(desc[1], current[-1])
# The current field is not part of the key: add the suffixes
else:
current[0] += r1postfix
desc[0] += r2postfix
ndtype.insert(nameidx + 1, desc)
#... we haven't: just add the description to the current list
else:
            names.append(desc[0])
ndtype.append(desc)
# Revert the elements to tuples
ndtype = [tuple(_) for _ in ndtype]
# Find the largest nb of common fields :
# r1cmn and r2cmn should be equal, but...
cmn = max(r1cmn, r2cmn)
# Construct an empty array
output = ma.masked_all((cmn + r1spc + r2spc,), dtype=ndtype)
names = output.dtype.names
for f in r1names:
selected = s1[f]
if f not in names or (f in r2names and not r2postfix and f not in key):
f += r1postfix
current = output[f]
current[:r1cmn] = selected[:r1cmn]
if jointype in ('outer', 'leftouter'):
current[cmn:cmn + r1spc] = selected[r1cmn:]
for f in r2names:
selected = s2[f]
if f not in names or (f in r1names and not r1postfix and f not in key):
f += r2postfix
current = output[f]
current[:r2cmn] = selected[:r2cmn]
if (jointype == 'outer') and r2spc:
current[-r2spc:] = selected[r2cmn:]
# Sort and finalize the output
output.sort(order=key)
kwargs = dict(usemask=usemask, asrecarray=asrecarray)
return _fix_output(_fix_defaults(output, defaults), **kwargs)
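# Hedged example (added; not part of the original module): a minimal inner
# join on a shared key, following the docstring above.  Only the joined key
# values are asserted; the full output dtype follows the postfix rules
# described in the docstring.  The helper is never called.
def _example_join_by():
    r1 = np.array([(1, 10.), (2, 20.), (3, 30.)], dtype=[('key', int), ('x', float)])
    r2 = np.array([(2, 200.), (3, 300.), (4, 400.)], dtype=[('key', int), ('y', float)])
    joined = join_by('key', r1, r2, jointype='inner', usemask=False)
    assert list(joined['key']) == [2, 3]
    return joined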
def rec_join(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
defaults=None):
"""
Join arrays `r1` and `r2` on keys.
Alternative to join_by, that always returns a np.recarray.
See Also
--------
join_by : equivalent function
"""
kwargs = dict(jointype=jointype, r1postfix=r1postfix, r2postfix=r2postfix,
defaults=defaults, usemask=False, asrecarray=True)
return join_by(key, r1, r2, **kwargs)
| bsd-3-clause |
LiaoPan/scikit-learn | examples/plot_digits_pipe.py | 250 | 1809 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Pipelining: chaining a PCA and a logistic regression
=========================================================
The PCA does an unsupervised dimensionality reduction, while the logistic
regression does the prediction.
We use a GridSearchCV to set the dimensionality of the PCA
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
logistic = linear_model.LogisticRegression()
pca = decomposition.PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
###############################################################################
# Plot the PCA spectrum
pca.fit(X_digits)
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.axes([.2, .2, .7, .7])
plt.plot(pca.explained_variance_, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')
###############################################################################
# Prediction
n_components = [20, 40, 64]
Cs = np.logspace(-4, 4, 3)
# Parameters of pipelines can be set using ‘__’ separated parameter names:
estimator = GridSearchCV(pipe,
dict(pca__n_components=n_components,
logistic__C=Cs))
estimator.fit(X_digits, y_digits)
plt.axvline(estimator.best_estimator_.named_steps['pca'].n_components,
linestyle=':', label='n_components chosen')
plt.legend(prop=dict(size=12))
plt.show()
| bsd-3-clause |
pythonvietnam/scikit-learn | examples/ensemble/plot_adaboost_multiclass.py | 354 | 4124 | """
=====================================
Multi-class AdaBoosted Decision Trees
=====================================
This example reproduces Figure 1 of Zhu et al [1] and shows how boosting can
improve prediction accuracy on a multi-class problem. The classification
dataset is constructed by taking a ten-dimensional standard normal distribution
and defining three classes separated by nested concentric ten-dimensional
spheres such that roughly equal numbers of samples are in each class (quantiles
of the :math:`\chi^2` distribution).
The performance of the SAMME and SAMME.R [1] algorithms are compared. SAMME.R
uses the probability estimates to update the additive model, while SAMME uses
the classifications only. As the example illustrates, the SAMME.R algorithm
typically converges faster than SAMME, achieving a lower test error with fewer
boosting iterations. The error of each algorithm on the test set after each
boosting iteration is shown on the left, the classification error on the test
set of each tree is shown in the middle, and the boost weight of each tree is
shown on the right. All trees have a weight of one in the SAMME.R algorithm and
therefore are not shown.
.. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
"""
print(__doc__)
# Author: Noel Dawe <[email protected]>
#
# License: BSD 3 clause
from sklearn.externals.six.moves import zip
import matplotlib.pyplot as plt
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
X, y = make_gaussian_quantiles(n_samples=13000, n_features=10,
n_classes=3, random_state=1)
n_split = 3000
X_train, X_test = X[:n_split], X[n_split:]
y_train, y_test = y[:n_split], y[n_split:]
bdt_real = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=2),
n_estimators=600,
learning_rate=1)
bdt_discrete = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=2),
n_estimators=600,
learning_rate=1.5,
algorithm="SAMME")
bdt_real.fit(X_train, y_train)
bdt_discrete.fit(X_train, y_train)
real_test_errors = []
discrete_test_errors = []
for real_test_predict, discrete_train_predict in zip(
bdt_real.staged_predict(X_test), bdt_discrete.staged_predict(X_test)):
real_test_errors.append(
1. - accuracy_score(real_test_predict, y_test))
discrete_test_errors.append(
1. - accuracy_score(discrete_train_predict, y_test))
n_trees_discrete = len(bdt_discrete)
n_trees_real = len(bdt_real)
# Boosting might terminate early, but the following arrays are always
# n_estimators long. We crop them to the actual number of trees here:
discrete_estimator_errors = bdt_discrete.estimator_errors_[:n_trees_discrete]
real_estimator_errors = bdt_real.estimator_errors_[:n_trees_real]
discrete_estimator_weights = bdt_discrete.estimator_weights_[:n_trees_discrete]
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.plot(range(1, n_trees_discrete + 1),
discrete_test_errors, c='black', label='SAMME')
plt.plot(range(1, n_trees_real + 1),
real_test_errors, c='black',
linestyle='dashed', label='SAMME.R')
plt.legend()
plt.ylim(0.18, 0.62)
plt.ylabel('Test Error')
plt.xlabel('Number of Trees')
plt.subplot(132)
plt.plot(range(1, n_trees_discrete + 1), discrete_estimator_errors,
"b", label='SAMME', alpha=.5)
plt.plot(range(1, n_trees_real + 1), real_estimator_errors,
"r", label='SAMME.R', alpha=.5)
plt.legend()
plt.ylabel('Error')
plt.xlabel('Number of Trees')
plt.ylim((.2,
max(real_estimator_errors.max(),
discrete_estimator_errors.max()) * 1.2))
plt.xlim((-20, len(bdt_discrete) + 20))
plt.subplot(133)
plt.plot(range(1, n_trees_discrete + 1), discrete_estimator_weights,
"b", label='SAMME')
plt.legend()
plt.ylabel('Weight')
plt.xlabel('Number of Trees')
plt.ylim((0, discrete_estimator_weights.max() * 1.2))
plt.xlim((-20, n_trees_discrete + 20))
# prevent overlapping y-axis labels
plt.subplots_adjust(wspace=0.25)
plt.show()
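# Optional summary, not part of the original example: report the final test accuracy
# of both boosted ensembles once all stages have been fit.
print("SAMME.R test accuracy: %.3f" % accuracy_score(y_test, bdt_real.predict(X_test)))
print("SAMME   test accuracy: %.3f" % accuracy_score(y_test, bdt_discrete.predict(X_test)))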
| bsd-3-clause |
matk86/pymatgen | pymatgen/io/abinit/flows.py | 2 | 105937 | # coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
A Flow is a container for Works, and works consist of tasks.
Flows are the final objects that can be dumped directly to a pickle file on disk
Flows are executed using abirun (abipy).
"""
from __future__ import unicode_literals, division, print_function
import os
import sys
import time
import collections
import warnings
import shutil
import copy
import tempfile
import numpy as np
from pprint import pprint
from six.moves import map, StringIO
from tabulate import tabulate
from pydispatch import dispatcher
from collections import OrderedDict
from monty.collections import as_set, dict2namedtuple
from monty.string import list_strings, is_string, make_banner
from monty.operator import operator_from_str
from monty.io import FileLock
from monty.pprint import draw_tree
from monty.termcolor import cprint, colored, cprint_map, get_terminal_size
from monty.inspect import find_top_pyfile
from monty.dev import deprecated
from monty.json import MSONable
from pymatgen.util.serialization import pmg_pickle_load, pmg_pickle_dump, pmg_serialize
from pymatgen.core.units import Memory
from pymatgen.util.io_utils import AtomicFile
from pymatgen.util.plotting import add_fig_kwargs, get_ax_fig_plt
from . import wrappers
from .nodes import Status, Node, NodeError, NodeResults, Dependency, GarbageCollector, check_spectator
from .tasks import ScfTask, DdkTask, DdeTask, TaskManager, FixQueueCriticalError
from .utils import File, Directory, Editor
from .abiinspect import yaml_read_irred_perts
from .works import NodeContainer, Work, BandStructureWork, PhononWork, BecWork, G0W0Work, QptdmWork, DteWork
from .events import EventsParser # autodoc_event_handlers
import logging
logger = logging.getLogger(__name__)
__author__ = "Matteo Giantomassi"
__copyright__ = "Copyright 2013, The Materials Project"
__version__ = "0.1"
__maintainer__ = "Matteo Giantomassi"
__all__ = [
"Flow",
"G0W0WithQptdmFlow",
"bandstructure_flow",
"g0w0_flow",
"phonon_flow",
]
class FlowResults(NodeResults):
JSON_SCHEMA = NodeResults.JSON_SCHEMA.copy()
#JSON_SCHEMA["properties"] = {
# "queries": {"type": "string", "required": True},
#}
@classmethod
def from_node(cls, flow):
"""Initialize an instance from a Work instance."""
new = super(FlowResults, cls).from_node(flow)
# Will put all files found in outdir in GridFs
d = {os.path.basename(f): f for f in flow.outdir.list_filepaths()}
# Add the pickle file.
d["pickle"] = flow.pickle_file if flow.pickle_protocol != 0 else (flow.pickle_file, "t")
new.add_gridfs_files(**d)
return new
class FlowError(NodeError):
"""Base Exception for :class:`Node` methods"""
class Flow(Node, NodeContainer, MSONable):
"""
This object is a container of work. Its main task is managing the
possible inter-dependencies among the work and the creation of
dynamic workflows that are generated by callbacks registered by the user.
.. attributes::
creation_date: String with the creation_date
pickle_protocol: Protocol for Pickle database (default: -1 i.e. latest protocol)
Important methods for constructing flows:
.. methods::
register_work: register (add) a work to the flow
        register_task: register a work that contains only this task and return the work
allocate: propagate the workdir and manager of the flow to all the registered tasks
build:
build_and_pickle_dump:
"""
VERSION = "0.1"
PICKLE_FNAME = "__AbinitFlow__.pickle"
Error = FlowError
Results = FlowResults
@classmethod
def from_inputs(cls, workdir, inputs, manager=None, pickle_protocol=-1, task_class=ScfTask, work_class=Work):
"""
Construct a simple flow from a list of inputs. The flow contains a single Work with
tasks whose class is given by task_class.
.. warning::
Don't use this interface if you have dependencies among the tasks.
Args:
workdir: String specifying the directory where the works will be produced.
inputs: List of inputs.
manager: :class:`TaskManager` object responsible for the submission of the jobs.
If manager is None, the object is initialized from the yaml file
located either in the working directory or in the user configuration dir.
pickle_protocol: Pickle protocol version used for saving the status of the object.
-1 denotes the latest version supported by the python interpreter.
task_class: The class of the :class:`Task`.
work_class: The class of the :class:`Work`.
"""
if not isinstance(inputs, (list, tuple)): inputs = [inputs]
flow = cls(workdir, manager=manager, pickle_protocol=pickle_protocol)
work = work_class()
for inp in inputs:
work.register(inp, task_class=task_class)
flow.register_work(work)
return flow.allocate()
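    # A minimal usage sketch (hedged): `scf_inputs` below is a hypothetical list of
    # AbinitInput objects and the TaskManager is read from the default manager.yml.
    #
    #   flow = Flow.from_inputs("flow_scf", inputs=scf_inputs)
    #   flow.build_and_pickle_dump()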
@classmethod
def as_flow(cls, obj):
"""Convert obj into a Flow. Accepts filepath, dict, or Flow object."""
if isinstance(obj, cls): return obj
if is_string(obj):
return cls.pickle_load(obj)
elif isinstance(obj, collections.Mapping):
return cls.from_dict(obj)
else:
raise TypeError("Don't know how to convert type %s into a Flow" % type(obj))
def __init__(self, workdir, manager=None, pickle_protocol=-1, remove=False):
"""
Args:
workdir: String specifying the directory where the works will be produced.
if workdir is None, the initialization of the working directory
is performed by flow.allocate(workdir).
manager: :class:`TaskManager` object responsible for the submission of the jobs.
If manager is None, the object is initialized from the yaml file
located either in the working directory or in the user configuration dir.
pickle_protocol: Pickle protocol version used for saving the status of the object.
-1 denotes the latest version supported by the python interpreter.
remove: attempt to remove working directory `workdir` if directory already exists.
"""
super(Flow, self).__init__()
if workdir is not None:
if remove and os.path.exists(workdir): shutil.rmtree(workdir)
self.set_workdir(workdir)
self.creation_date = time.asctime()
if manager is None: manager = TaskManager.from_user_config()
self.manager = manager.deepcopy()
# List of works.
self._works = []
self._waited = 0
# List of callbacks that must be executed when the dependencies reach S_OK
self._callbacks = []
# Install default list of handlers at the flow level.
# Users can override the default list by calling flow.install_event_handlers in the script.
# Example:
#
# # flow level (common case)
# flow.install_event_handlers(handlers=my_handlers)
#
# # task level (advanced mode)
# flow[0][0].install_event_handlers(handlers=my_handlers)
#
self.install_event_handlers()
self.pickle_protocol = int(pickle_protocol)
# ID used to access mongodb
self._mongo_id = None
# Save the location of the script used to generate the flow.
# This trick won't work if we are running with nosetests, py.test etc
pyfile = find_top_pyfile()
if "python" in pyfile or "ipython" in pyfile: pyfile = "<" + pyfile + ">"
self.set_pyfile(pyfile)
# TODO
# Signal slots: a dictionary with the list
# of callbacks indexed by node_id and SIGNAL_TYPE.
# When the node changes its status, it broadcast a signal.
# The flow is listening to all the nodes of the calculation
# [node_id][SIGNAL] = list_of_signal_handlers
#self._sig_slots = slots = {}
#for work in self:
# slots[work] = {s: [] for s in work.S_ALL}
#for task in self.iflat_tasks():
# slots[task] = {s: [] for s in work.S_ALL}
@pmg_serialize
def as_dict(self, **kwargs):
"""
JSON serialization, note that we only need to save
a string with the working directory since the object will be
reconstructed from the pickle file located in workdir
"""
return {"workdir": self.workdir}
# This is needed for fireworks.
to_dict = as_dict
@classmethod
def from_dict(cls, d, **kwargs):
"""Reconstruct the flow from the pickle file."""
return cls.pickle_load(d["workdir"], **kwargs)
@classmethod
def temporary_flow(cls, manager=None):
"""Return a Flow in a temporary directory. Useful for unit tests."""
return cls(workdir=tempfile.mkdtemp(), manager=manager)
def set_workdir(self, workdir, chroot=False):
"""
Set the working directory. Cannot be set more than once unless chroot is True
"""
if not chroot and hasattr(self, "workdir") and self.workdir != workdir:
raise ValueError("self.workdir != workdir: %s, %s" % (self.workdir, workdir))
# Directories with (input|output|temporary) data.
self.workdir = os.path.abspath(workdir)
self.indir = Directory(os.path.join(self.workdir, "indata"))
self.outdir = Directory(os.path.join(self.workdir, "outdata"))
self.tmpdir = Directory(os.path.join(self.workdir, "tmpdata"))
def reload(self):
"""
Reload the flow from the pickle file. Used when we are monitoring the flow
executed by the scheduler. In this case, indeed, the flow might have been changed
by the scheduler and we have to reload the new flow in memory.
"""
        new = self.__class__.pickle_load(self.workdir)
        # Rebinding the local name `self` would be a no-op: copy the reloaded state into this instance.
        self.__dict__.update(new.__dict__)
@classmethod
def pickle_load(cls, filepath, spectator_mode=True, remove_lock=False):
"""
Loads the object from a pickle file and performs initial setup.
Args:
            filepath: Filename or directory name. If filepath is a directory, we
scan the directory tree starting from filepath and we
read the first pickle database. Raise RuntimeError if multiple
databases are found.
spectator_mode: If True, the nodes of the flow are not connected by signals.
This option is usually used when we want to read a flow
in read-only mode and we want to avoid callbacks that can change the flow.
remove_lock:
True to remove the file lock if any (use it carefully).
"""
if os.path.isdir(filepath):
# Walk through each directory inside path and find the pickle database.
for dirpath, dirnames, filenames in os.walk(filepath):
fnames = [f for f in filenames if f == cls.PICKLE_FNAME]
if fnames:
if len(fnames) == 1:
filepath = os.path.join(dirpath, fnames[0])
break # Exit os.walk
else:
err_msg = "Found multiple databases:\n %s" % str(fnames)
raise RuntimeError(err_msg)
else:
err_msg = "Cannot find %s inside directory %s" % (cls.PICKLE_FNAME, filepath)
raise ValueError(err_msg)
if remove_lock and os.path.exists(filepath + ".lock"):
try:
os.remove(filepath + ".lock")
except:
pass
with FileLock(filepath):
with open(filepath, "rb") as fh:
flow = pmg_pickle_load(fh)
# Check if versions match.
if flow.VERSION != cls.VERSION:
msg = ("File flow version %s != latest version %s\n."
"Regenerate the flow to solve the problem " % (flow.VERSION, cls.VERSION))
warnings.warn(msg)
flow.set_spectator_mode(spectator_mode)
# Recompute the status of each task since tasks that
# have been submitted previously might be completed.
flow.check_status()
return flow
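    # Typical read-only inspection sketch (hedged): load an existing flow from its
    # working directory without connecting signals and print a status summary.
    #
    #   flow = Flow.pickle_load("flow_scf", spectator_mode=True)
    #   flow.show_status()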
@classmethod
def pickle_loads(cls, s):
"""Reconstruct the flow from a string."""
strio = StringIO()
strio.write(s)
strio.seek(0)
flow = pmg_pickle_load(strio)
return flow
def __len__(self):
return len(self.works)
def __iter__(self):
return self.works.__iter__()
def __getitem__(self, slice):
return self.works[slice]
def set_pyfile(self, pyfile):
"""
Set the path of the python script used to generate the flow.
.. Example:
flow.set_pyfile(__file__)
"""
# TODO: Could use a frame hack to get the caller outside abinit
# so that pyfile is automatically set when we __init__ it!
self._pyfile = os.path.abspath(pyfile)
@property
def pyfile(self):
"""
Absolute path of the python script used to generate the flow. Set by `set_pyfile`
"""
try:
return self._pyfile
except AttributeError:
return None
@property
def pid_file(self):
"""The path of the pid file created by PyFlowScheduler."""
return os.path.join(self.workdir, "_PyFlowScheduler.pid")
def check_pid_file(self):
"""
This function checks if we are already running the :class:`Flow` with a :class:`PyFlowScheduler`.
        Raises: Flow.Error if the pid file of the scheduler exists.
"""
if not os.path.exists(self.pid_file):
return 0
self.show_status()
raise self.Error("""\n\
pid_file
%s
already exists. There are two possibilities:
1) There's an another instance of PyFlowScheduler running
2) The previous scheduler didn't exit in a clean way
To solve case 1:
Kill the previous scheduler (use 'kill pid' where pid is the number reported in the file)
Then you can restart the new scheduler.
To solve case 2:
Remove the pid_file and restart the scheduler.
Exiting""" % self.pid_file)
@property
def pickle_file(self):
"""The path of the pickle file."""
return os.path.join(self.workdir, self.PICKLE_FNAME)
@property
def mongo_id(self):
return self._mongo_id
@mongo_id.setter
def mongo_id(self, value):
if self.mongo_id is not None:
raise RuntimeError("Cannot change mongo_id %s" % self.mongo_id)
self._mongo_id = value
def mongodb_upload(self, **kwargs):
from abiflows.core.scheduler import FlowUploader
FlowUploader().upload(self, **kwargs)
def validate_json_schema(self):
"""Validate the JSON schema. Return list of errors."""
errors = []
for work in self:
for task in work:
if not task.get_results().validate_json_schema():
errors.append(task)
if not work.get_results().validate_json_schema():
errors.append(work)
if not self.get_results().validate_json_schema():
errors.append(self)
return errors
def get_mongo_info(self):
"""
Return a JSON dictionary with information on the flow.
Mainly used for constructing the info section in `FlowEntry`.
The default implementation is empty. Subclasses must implement it
"""
return {}
def mongo_assimilate(self):
"""
This function is called by client code when the flow is completed
Return a JSON dictionary with the most important results produced
by the flow. The default implementation is empty. Subclasses must implement it
"""
return {}
@property
def works(self):
"""List of :class:`Work` objects contained in self.."""
return self._works
@property
def all_ok(self):
"""True if all the tasks in works have reached `S_OK`."""
return all(work.all_ok for work in self)
@property
def num_tasks(self):
"""Total number of tasks"""
return len(list(self.iflat_tasks()))
@property
def errored_tasks(self):
"""List of errored tasks."""
etasks = []
for status in [self.S_ERROR, self.S_QCRITICAL, self.S_ABICRITICAL]:
etasks.extend(list(self.iflat_tasks(status=status)))
return set(etasks)
@property
def num_errored_tasks(self):
"""The number of tasks whose status is `S_ERROR`."""
return len(self.errored_tasks)
@property
def unconverged_tasks(self):
"""List of unconverged tasks."""
return list(self.iflat_tasks(status=self.S_UNCONVERGED))
@property
def num_unconverged_tasks(self):
"""The number of tasks whose status is `S_UNCONVERGED`."""
return len(self.unconverged_tasks)
@property
def status_counter(self):
"""
Returns a :class:`Counter` object that counts the number of tasks with
given status (use the string representation of the status as key).
"""
# Count the number of tasks with given status in each work.
counter = self[0].status_counter
for work in self[1:]:
counter += work.status_counter
return counter
@property
def ncores_reserved(self):
"""
Returns the number of cores reserved in this moment.
A core is reserved if the task is not running but
we have submitted the task to the queue manager.
"""
return sum(work.ncores_reserved for work in self)
@property
def ncores_allocated(self):
"""
Returns the number of cores allocated in this moment.
A core is allocated if it's running a task or if we have
submitted a task to the queue manager but the job is still pending.
"""
return sum(work.ncores_allocated for work in self)
@property
def ncores_used(self):
"""
Returns the number of cores used in this moment.
A core is used if there's a job that is running on it.
"""
return sum(work.ncores_used for work in self)
@property
def has_chrooted(self):
"""
Returns a string that evaluates to True if we have changed
        the workdir for visualization purposes e.g. we are using sshfs
        to mount the remote directory where the `Flow` is located.
The string gives the previous workdir of the flow.
"""
try:
return self._chrooted_from
except AttributeError:
return ""
def chroot(self, new_workdir):
"""
        Change the workdir of the :class:`Flow`. Mainly used for
allowing the user to open the GUI on the local host
and access the flow from remote via sshfs.
.. note::
Calling this method will make the flow go in read-only mode.
"""
self._chrooted_from = self.workdir
self.set_workdir(new_workdir, chroot=True)
for i, work in enumerate(self):
new_wdir = os.path.join(self.workdir, "w" + str(i))
work.chroot(new_wdir)
def groupby_status(self):
"""
        Returns an ordered dictionary mapping the task status to
the list of named tuples (task, work_index, task_index).
"""
Entry = collections.namedtuple("Entry", "task wi ti")
d = collections.defaultdict(list)
for task, wi, ti in self.iflat_tasks_wti():
d[task.status].append(Entry(task, wi, ti))
# Sort keys according to their status.
return OrderedDict([(k, d[k]) for k in sorted(list(d.keys()))])
def groupby_task_class(self):
"""
Returns a dictionary mapping the task class to the list of tasks in the flow
"""
# Find all Task classes
class2tasks = OrderedDict()
for task in self.iflat_tasks():
cls = task.__class__
if cls not in class2tasks: class2tasks[cls] = []
class2tasks[cls].append(task)
return class2tasks
def iflat_nodes(self, status=None, op="==", nids=None):
"""
Generators that produces a flat sequence of nodes.
if status is not None, only the tasks with the specified status are selected.
nids is an optional list of node identifiers used to filter the nodes.
"""
nids = as_set(nids)
if status is None:
if not (nids and self.node_id not in nids):
yield self
for work in self:
if nids and work.node_id not in nids: continue
yield work
for task in work:
if nids and task.node_id not in nids: continue
yield task
else:
# Get the operator from the string.
op = operator_from_str(op)
# Accept Task.S_FLAG or string.
status = Status.as_status(status)
if not (nids and self.node_id not in nids):
if op(self.status, status): yield self
for wi, work in enumerate(self):
if nids and work.node_id not in nids: continue
if op(work.status, status): yield work
for ti, task in enumerate(work):
if nids and task.node_id not in nids: continue
if op(task.status, status): yield task
def node_from_nid(self, nid):
"""Return the node in the `Flow` with the given `nid` identifier"""
for node in self.iflat_nodes():
if node.node_id == nid: return node
raise ValueError("Cannot find node with node id: %s" % nid)
def iflat_tasks_wti(self, status=None, op="==", nids=None):
"""
Generator to iterate over all the tasks of the `Flow`.
Yields:
(task, work_index, task_index)
If status is not None, only the tasks whose status satisfies
the condition (task.status op status) are selected
status can be either one of the flags defined in the :class:`Task` class
(e.g Task.S_OK) or a string e.g "S_OK"
nids is an optional list of node identifiers used to filter the tasks.
"""
return self._iflat_tasks_wti(status=status, op=op, nids=nids, with_wti=True)
def iflat_tasks(self, status=None, op="==", nids=None):
"""
Generator to iterate over all the tasks of the :class:`Flow`.
If status is not None, only the tasks whose status satisfies
the condition (task.status op status) are selected
status can be either one of the flags defined in the :class:`Task` class
(e.g Task.S_OK) or a string e.g "S_OK"
nids is an optional list of node identifiers used to filter the tasks.
"""
return self._iflat_tasks_wti(status=status, op=op, nids=nids, with_wti=False)
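    # Example sketch (hedged): iterate over the tasks that have already reached S_OK
    # and print their node identifiers.
    #
    #   for task in flow.iflat_tasks(status="S_OK"):
    #       print(task.node_id, task.status)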
def _iflat_tasks_wti(self, status=None, op="==", nids=None, with_wti=True):
"""
        Generator that produces a flat sequence of tasks.
if status is not None, only the tasks with the specified status are selected.
nids is an optional list of node identifiers used to filter the tasks.
Returns:
(task, work_index, task_index) if with_wti is True else task
"""
nids = as_set(nids)
if status is None:
for wi, work in enumerate(self):
for ti, task in enumerate(work):
if nids and task.node_id not in nids: continue
if with_wti:
yield task, wi, ti
else:
yield task
else:
# Get the operator from the string.
op = operator_from_str(op)
# Accept Task.S_FLAG or string.
status = Status.as_status(status)
for wi, work in enumerate(self):
for ti, task in enumerate(work):
if nids and task.node_id not in nids: continue
if op(task.status, status):
if with_wti:
yield task, wi, ti
else:
yield task
def abivalidate_inputs(self):
"""
Run ABINIT in dry mode to validate all the inputs of the flow.
Return:
(isok, tuples)
isok is True if all inputs are ok.
tuples is List of `namedtuple` objects, one for each task in the flow.
Each namedtuple has the following attributes:
retcode: Return code. 0 if OK.
log_file: log file of the Abinit run, use log_file.read() to access its content.
stderr_file: stderr file of the Abinit run. use stderr_file.read() to access its content.
Raises:
`RuntimeError` if executable is not in $PATH.
"""
if not self.allocated:
self.build()
#self.build_and_pickle_dump()
isok, tuples = True, []
for task in self.iflat_tasks():
t = task.input.abivalidate()
if t.retcode != 0: isok = False
tuples.append(t)
return isok, tuples
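    # Example sketch (hedged): validate every input with the Abinit parser before
    # submission and show the stderr of the runs that failed.
    #
    #   isok, tuples = flow.abivalidate_inputs()
    #   if not isok:
    #       for t in tuples:
    #           if t.retcode != 0: print(t.stderr_file.read())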
def check_dependencies(self):
"""Test the dependencies of the nodes for possible deadlocks."""
deadlocks = []
for task in self.iflat_tasks():
for dep in task.deps:
if dep.node.depends_on(task):
deadlocks.append((task, dep.node))
if deadlocks:
lines = ["Detect wrong list of dependecies that will lead to a deadlock:"]
lines.extend(["%s <--> %s" % nodes for nodes in deadlocks])
raise RuntimeError("\n".join(lines))
def find_deadlocks(self):
"""
This function detects deadlocks
Return:
named tuple with the tasks grouped in: deadlocks, runnables, running
"""
        # Find jobs that can be submitted and the jobs that are already in the queue.
runnables = []
for work in self:
runnables.extend(work.fetch_alltasks_to_run())
runnables.extend(list(self.iflat_tasks(status=self.S_SUB)))
# Running jobs.
running = list(self.iflat_tasks(status=self.S_RUN))
# Find deadlocks.
err_tasks = self.errored_tasks
deadlocked = []
if err_tasks:
for task in self.iflat_tasks():
if any(task.depends_on(err_task) for err_task in err_tasks):
deadlocked.append(task)
return dict2namedtuple(deadlocked=deadlocked, runnables=runnables, running=running)
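    # Example sketch (hedged): the namedtuple returned above can be inspected as
    #
    #   d = flow.find_deadlocks()
    #   print("deadlocked:", d.deadlocked, "runnables:", d.runnables, "running:", d.running)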
def check_status(self, **kwargs):
"""
Check the status of the works in self.
Args:
show: True to show the status of the flow.
kwargs: keyword arguments passed to show_status
"""
for work in self:
work.check_status()
if kwargs.pop("show", False):
self.show_status(**kwargs)
@property
def status(self):
"""The status of the :class:`Flow` i.e. the minimum of the status of its tasks and its works"""
return min(work.get_all_status(only_min=True) for work in self)
#def restart_unconverged_tasks(self, max_nlauch, excs):
# nlaunch = 0
# for task in self.unconverged_tasks:
# try:
# logger.info("Flow will try restart task %s" % task)
# fired = task.restart()
# if fired:
# nlaunch += 1
# max_nlaunch -= 1
# if max_nlaunch == 0:
# logger.info("Restart: too many jobs in the queue, returning")
# self.pickle_dump()
# return nlaunch, max_nlaunch
# except task.RestartError:
# excs.append(straceback())
# return nlaunch, max_nlaunch
def fix_abicritical(self):
"""
This function tries to fix critical events originating from ABINIT.
Returns the number of tasks that have been fixed.
"""
count = 0
for task in self.iflat_tasks(status=self.S_ABICRITICAL):
count += task.fix_abicritical()
return count
def fix_queue_critical(self):
"""
This function tries to fix critical events originating from the queue submission system.
Returns the number of tasks that have been fixed.
"""
count = 0
for task in self.iflat_tasks(status=self.S_QCRITICAL):
logger.info("Will try to fix task %s" % str(task))
try:
print(task.fix_queue_critical())
count += 1
except FixQueueCriticalError:
logger.info("Not able to fix task %s" % task)
return count
def show_info(self, **kwargs):
"""
Print info on the flow i.e. total number of tasks, works, tasks grouped by class.
Example:
Task Class Number
------------ --------
ScfTask 1
NscfTask 1
ScrTask 2
SigmaTask 6
"""
stream = kwargs.pop("stream", sys.stdout)
lines = [str(self)]
app = lines.append
app("Number of works: %d, total number of tasks: %s" % (len(self), self.num_tasks) )
app("Number of tasks with a given class:")
# Build Table
data = [[cls.__name__, len(tasks)]
for cls, tasks in self.groupby_task_class().items()]
app(str(tabulate(data, headers=["Task Class", "Number"])))
stream.write("\n".join(lines))
def show_summary(self, **kwargs):
"""
Print a short summary with the status of the flow and a counter task_status --> number_of_tasks
Args:
stream: File-like object, Default: sys.stdout
Example:
Status Count
--------- -------
Completed 10
<Flow, node_id=27163, workdir=flow_gwconv_ecuteps>, num_tasks=10, all_ok=True
"""
stream = kwargs.pop("stream", sys.stdout)
stream.write("\n")
table = list(self.status_counter.items())
s = tabulate(table, headers=["Status", "Count"])
stream.write(s + "\n")
stream.write("\n")
stream.write("%s, num_tasks=%s, all_ok=%s\n" % (str(self), self.num_tasks, self.all_ok))
stream.write("\n")
def show_status(self, **kwargs):
"""
Report the status of the works and the status of the different tasks on the specified stream.
Args:
stream: File-like object, Default: sys.stdout
nids: List of node identifiers. By defaults all nodes are shown
wslice: Slice object used to select works.
            verbose: Verbosity level (default 0). Use a value > 0 to show also the finalized works.
"""
stream = kwargs.pop("stream", sys.stdout)
nids = as_set(kwargs.pop("nids", None))
wslice = kwargs.pop("wslice", None)
verbose = kwargs.pop("verbose", 0)
wlist = None
if wslice is not None:
            # Convert the slice object to an explicit list of work indices.
            wlist = list(range(*wslice.indices(len(self))))
#has_colours = stream_has_colours(stream)
has_colours = True
red = "red" if has_colours else None
for i, work in enumerate(self):
if nids and work.node_id not in nids: continue
print("", file=stream)
cprint_map("Work #%d: %s, Finalized=%s" % (i, work, work.finalized), cmap={"True": "green"}, file=stream)
            if wlist is not None and i not in wlist: continue
if verbose == 0 and work.finalized:
print(" Finalized works are not shown. Use verbose > 0 to force output.", file=stream)
continue
headers = ["Task", "Status", "Queue", "MPI|Omp|Gb",
"Warn|Com", "Class", "Sub|Rest|Corr", "Time",
"Node_ID"]
table = []
tot_num_errors = 0
for task in work:
if nids and task.node_id not in nids: continue
task_name = os.path.basename(task.name)
# FIXME: This should not be done here.
# get_event_report should be called only in check_status
# Parse the events in the main output.
report = task.get_event_report()
# Get time info (run-time or time in queue or None)
stime = None
timedelta = task.datetimes.get_runtime()
if timedelta is not None:
stime = str(timedelta) + "R"
else:
timedelta = task.datetimes.get_time_inqueue()
if timedelta is not None:
stime = str(timedelta) + "Q"
events = "|".join(2*["NA"])
if report is not None:
events = '{:>4}|{:>3}'.format(*map(str, (
report.num_warnings, report.num_comments)))
para_info = '{:>4}|{:>3}|{:>3}'.format(*map(str, (
task.mpi_procs, task.omp_threads, "%.1f" % task.mem_per_proc.to("Gb"))))
task_info = list(map(str, [task.__class__.__name__,
(task.num_launches, task.num_restarts, task.num_corrections), stime, task.node_id]))
qinfo = "None"
if task.queue_id is not None:
qinfo = str(task.queue_id) + "@" + str(task.qname)
if task.status.is_critical:
tot_num_errors += 1
task_name = colored(task_name, red)
if has_colours:
table.append([task_name, task.status.colored, qinfo,
para_info, events] + task_info)
else:
                table.append([task_name, str(task.status), qinfo,
                              para_info, events] + task_info)
# Print table and write colorized line with the total number of errors.
print(tabulate(table, headers=headers, tablefmt="grid"), file=stream)
if tot_num_errors:
cprint("Total number of errors: %d" % tot_num_errors, "red", file=stream)
print("", file=stream)
if self.all_ok:
cprint("\nall_ok reached\n", "green", file=stream)
def show_events(self, status=None, nids=None):
"""
        Print the Abinit events (ERRORS, WARNINGS, COMMENTS) to stdout.
Args:
            status: if not None, only the tasks with this status are selected
nids: optional list of node identifiers used to filter the tasks.
"""
nrows, ncols = get_terminal_size()
for task in self.iflat_tasks(status=status, nids=nids):
report = task.get_event_report()
if report:
print(make_banner(str(task), width=ncols, mark="="))
print(report)
#report = report.filter_types()
def show_corrections(self, status=None, nids=None):
"""
Show the corrections applied to the flow at run-time.
Args:
            status: if not None, only the tasks with this status are selected.
nids: optional list of node identifiers used to filter the tasks.
Return: The number of corrections found.
"""
nrows, ncols = get_terminal_size()
count = 0
for task in self.iflat_tasks(status=status, nids=nids):
if task.num_corrections == 0: continue
count += 1
print(make_banner(str(task), width=ncols, mark="="))
for corr in task.corrections:
pprint(corr)
if not count: print("No correction found.")
return count
def show_history(self, status=None, nids=None, full_history=False, metadata=False):
"""
Print the history of the flow to stdout.
Args:
            status: if not None, only the tasks with this status are selected
full_history: Print full info set, including nodes with an empty history.
nids: optional list of node identifiers used to filter the tasks.
metadata: print history metadata (experimental)
"""
nrows, ncols = get_terminal_size()
works_done = []
        # Loop over the tasks and show the history of the work if it is not yet in works_done.
for task in self.iflat_tasks(status=status, nids=nids):
work = task.work
if work not in works_done:
works_done.append(work)
if work.history or full_history:
cprint(make_banner(str(work), width=ncols, mark="="), **work.status.color_opts)
print(work.history.to_string(metadata=metadata))
if task.history or full_history:
cprint(make_banner(str(task), width=ncols, mark="="), **task.status.color_opts)
print(task.history.to_string(metadata=metadata))
# Print the history of the flow.
if self.history or full_history:
cprint(make_banner(str(self), width=ncols, mark="="), **self.status.color_opts)
print(self.history.to_string(metadata=metadata))
def show_inputs(self, varnames=None, nids=None, wslice=None, stream=sys.stdout):
"""
Print the input of the tasks to the given stream.
Args:
varnames:
List of Abinit variables. If not None, only the variable in varnames
are selected and printed.
nids:
List of node identifiers. By defaults all nodes are shown
wslice:
Slice object used to select works.
stream:
File-like object, Default: sys.stdout
"""
if varnames is not None:
# Build dictionary varname --> [(task1, value), (task2, value), ...]
varnames = [s.strip() for s in list_strings(varnames)]
dlist = collections.defaultdict(list)
for task in self.select_tasks(nids=nids, wslice=wslice):
dstruct = task.input.structure.as_dict(fmt="abivars")
for vname in varnames:
value = task.input.get(vname, None)
if value is None: # maybe in structure?
value = dstruct.get(vname, None)
if value is not None:
dlist[vname].append((task, value))
for vname in varnames:
tv_list = dlist[vname]
if not tv_list:
stream.write("[%s]: Found 0 tasks with this variable\n" % vname)
else:
stream.write("[%s]: Found %s tasks with this variable\n" % (vname, len(tv_list)))
for i, (task, value) in enumerate(tv_list):
stream.write(" %s --> %s\n" % (str(value), task))
stream.write("\n")
else:
lines = []
for task in self.select_tasks(nids=nids, wslice=wslice):
s = task.make_input(with_header=True)
# Add info on dependencies.
if task.deps:
s += "\n\nDependencies:\n" + "\n".join(str(dep) for dep in task.deps)
else:
s += "\n\nDependencies: None"
lines.append(2*"\n" + 80 * "=" + "\n" + s + 2*"\n")
stream.writelines(lines)
def listext(self, ext, stream=sys.stdout):
"""
Print to the given `stream` a table with the list of the output files
with the given `ext` produced by the flow.
"""
nodes_files = []
for node in self.iflat_nodes():
filepath = node.outdir.has_abiext(ext)
if filepath:
nodes_files.append((node, File(filepath)))
if nodes_files:
print("Found %s files with extension %s produced by the flow" % (len(nodes_files), ext), file=stream)
table = [[f.relpath, "%.2f" % (f.get_stat().st_size / 1024**2),
node.node_id, node.__class__.__name__]
for node, f in nodes_files]
print(tabulate(table, headers=["File", "Size [Mb]", "Node_ID", "Node Class"]), file=stream)
else:
print("No output file with extension %s has been produced by the flow" % ext, file=stream)
def select_tasks(self, nids=None, wslice=None):
"""
Return a list with a subset of tasks.
Args:
nids: List of node identifiers.
wslice: Slice object used to select works.
.. note::
nids and wslice are mutually exclusive.
If no argument is provided, the full list of tasks is returned.
"""
if nids is not None:
assert wslice is None
tasks = self.tasks_from_nids(nids)
elif wslice is not None:
tasks = []
for work in self[wslice]:
tasks.extend([t for t in work])
else:
# All tasks selected if no option is provided.
tasks = list(self.iflat_tasks())
return tasks
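    # Example sketch (hedged): pick the tasks of the first two works only.
    #
    #   tasks = flow.select_tasks(wslice=slice(0, 2))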
def inspect(self, nids=None, wslice=None, **kwargs):
"""
Inspect the tasks (SCF iterations, Structural relaxation ...) and
produces matplotlib plots.
Args:
nids: List of node identifiers.
wslice: Slice object used to select works.
kwargs: keyword arguments passed to `task.inspect` method.
.. note::
nids and wslice are mutually exclusive.
If nids and wslice are both None, all tasks in self are inspected.
Returns:
List of `matplotlib` figures.
"""
figs = []
for task in self.select_tasks(nids=nids, wslice=wslice):
if hasattr(task, "inspect"):
fig = task.inspect(**kwargs)
if fig is None:
cprint("Cannot inspect Task %s" % task, color="blue")
else:
figs.append(fig)
else:
cprint("Task %s does not provide an inspect method" % task, color="blue")
return figs
def get_results(self, **kwargs):
results = self.Results.from_node(self)
results.update(self.get_dict_for_mongodb_queries())
return results
def get_dict_for_mongodb_queries(self):
"""
This function returns a dictionary with the attributes that will be
put in the mongodb document to facilitate the query.
Subclasses may want to replace or extend the default behaviour.
"""
d = {}
return d
# TODO
all_structures = [task.input.structure for task in self.iflat_tasks()]
all_pseudos = [task.input.pseudos for task in self.iflat_tasks()]
def look_before_you_leap(self):
"""
This method should be called before running the calculation to make
sure that the most important requirements are satisfied.
Return:
List of strings with inconsistencies/errors.
"""
errors = []
try:
self.check_dependencies()
except self.Error as exc:
errors.append(str(exc))
if self.has_db:
try:
self.manager.db_connector.get_collection()
except Exception as exc:
errors.append("""
ERROR while trying to connect to the MongoDB database:
Exception:
%s
Connector:
%s
""" % (exc, self.manager.db_connector))
return "\n".join(errors)
@property
def has_db(self):
"""True if flow uses `MongoDB` to store the results."""
return self.manager.has_db
def db_insert(self):
"""
Insert results in the `MongDB` database.
"""
assert self.has_db
# Connect to MongoDb and get the collection.
coll = self.manager.db_connector.get_collection()
print("Mongodb collection %s with count %d", coll, coll.count())
start = time.time()
for work in self:
for task in work:
results = task.get_results()
pprint(results)
results.update_collection(coll)
results = work.get_results()
pprint(results)
results.update_collection(coll)
print("MongoDb update done in %s [s]" % time.time() - start)
results = self.get_results()
pprint(results)
results.update_collection(coll)
# Update the pickle file to save the mongo ids.
self.pickle_dump()
for d in coll.find():
pprint(d)
def tasks_from_nids(self, nids):
"""
Return the list of tasks associated to the given list of node identifiers (nids).
.. note::
Invalid ids are ignored
"""
if not isinstance(nids, collections.Iterable): nids = [nids]
tasks = []
for nid in nids:
for task in self.iflat_tasks():
if task.node_id == nid:
tasks.append(task)
break
return tasks
def wti_from_nids(self, nids):
"""Return the list of (w, t) indices from the list of node identifiers nids."""
return [task.pos for task in self.tasks_from_nids(nids)]
def open_files(self, what="o", status=None, op="==", nids=None, editor=None):
"""
Open the files of the flow inside an editor (command line interface).
Args:
what: string with the list of characters selecting the file type
Possible choices:
i ==> input_file,
o ==> output_file,
f ==> files_file,
j ==> job_file,
l ==> log_file,
e ==> stderr_file,
q ==> qout_file,
all ==> all files.
            status: if not None, only the tasks with this status are selected
op: status operator. Requires status. A task is selected
if task.status op status evaluates to true.
nids: optional list of node identifiers used to filter the tasks.
editor: Select the editor. None to use the default editor ($EDITOR shell env var)
"""
# Build list of files to analyze.
files = []
for task in self.iflat_tasks(status=status, op=op, nids=nids):
lst = task.select_files(what)
if lst:
files.extend(lst)
return Editor(editor=editor).edit_files(files)
def parse_timing(self, nids=None):
"""
Parse the timer data in the main output file(s) of Abinit.
Requires timopt /= 0 in the input file (usually timopt = -1)
Args:
nids: optional list of node identifiers used to filter the tasks.
Return: :class:`AbinitTimerParser` instance, None if error.
"""
# Get the list of output files according to nids.
paths = [task.output_file.path for task in self.iflat_tasks(nids=nids)]
# Parse data.
from .abitimer import AbinitTimerParser
parser = AbinitTimerParser()
read_ok = parser.parse(paths)
if read_ok:
return parser
return None
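    # Example sketch (hedged, requires timopt /= 0 in the Abinit input):
    #
    #   parser = flow.parse_timing()
    #   if parser is None: print("Cannot parse timing data")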
def show_abierrors(self, nids=None, stream=sys.stdout):
"""
Write to the given stream the list of ABINIT errors for all tasks whose status is S_ABICRITICAL.
Args:
nids: optional list of node identifiers used to filter the tasks.
stream: File-like object. Default: sys.stdout
"""
lines = []
app = lines.append
for task in self.iflat_tasks(status=self.S_ABICRITICAL, nids=nids):
header = "=== " + task.qout_file.path + "==="
app(header)
report = task.get_event_report()
if report is not None:
app("num_errors: %s, num_warnings: %s, num_comments: %s" % (
report.num_errors, report.num_warnings, report.num_comments))
app("*** ERRORS ***")
app("\n".join(str(e) for e in report.errors))
app("*** BUGS ***")
app("\n".join(str(b) for b in report.bugs))
else:
app("get_envent_report returned None!")
app("=" * len(header) + 2*"\n")
return stream.writelines(lines)
def show_qouts(self, nids=None, stream=sys.stdout):
"""
Write to the given stream the content of the queue output file for all tasks whose status is S_QCRITICAL.
Args:
nids: optional list of node identifiers used to filter the tasks.
stream: File-like object. Default: sys.stdout
"""
lines = []
for task in self.iflat_tasks(status=self.S_QCRITICAL, nids=nids):
header = "=== " + task.qout_file.path + "==="
lines.append(header)
if task.qout_file.exists:
with open(task.qout_file.path, "rt") as fh:
lines += fh.readlines()
else:
lines.append("File does not exist!")
lines.append("=" * len(header) + 2*"\n")
return stream.writelines(lines)
def debug(self, status=None, nids=None):
"""
        This method is usually used when the flow didn't complete successfully.
        It analyzes the files produced by the tasks to facilitate debugging.
Info are printed to stdout.
Args:
status: If not None, only the tasks with this status are selected
nids: optional list of node identifiers used to filter the tasks.
"""
nrows, ncols = get_terminal_size()
# Test for scheduler exceptions first.
sched_excfile = os.path.join(self.workdir, "_exceptions")
if os.path.exists(sched_excfile):
with open(sched_excfile, "r") as fh:
cprint("Found exceptions raised by the scheduler", "red")
cprint(fh.read(), color="red")
return
if status is not None:
tasks = list(self.iflat_tasks(status=status, nids=nids))
else:
errors = list(self.iflat_tasks(status=self.S_ERROR, nids=nids))
qcriticals = list(self.iflat_tasks(status=self.S_QCRITICAL, nids=nids))
abicriticals = list(self.iflat_tasks(status=self.S_ABICRITICAL, nids=nids))
tasks = errors + qcriticals + abicriticals
# For each task selected:
# 1) Check the error files of the task. If not empty, print the content to stdout and we are done.
# 2) If error files are empty, look at the master log file for possible errors
        # 3) If this check also fails, scan all the process log files.
# TODO: This check is not needed if we introduce a new __abinit_error__ file
# that is created by the first MPI process that invokes MPI abort!
#
ntasks = 0
for task in tasks:
print(make_banner(str(task), width=ncols, mark="="))
ntasks += 1
# Start with error files.
for efname in ["qerr_file", "stderr_file",]:
err_file = getattr(task, efname)
if err_file.exists:
s = err_file.read()
if not s: continue
print(make_banner(str(err_file), width=ncols, mark="="))
cprint(s, color="red")
#count += 1
# Check main log file.
try:
report = task.get_event_report()
if report and report.num_errors:
print(make_banner(os.path.basename(report.filename), width=ncols, mark="="))
s = "\n".join(str(e) for e in report.errors)
else:
s = None
except Exception as exc:
s = str(exc)
count = 0 # count > 0 means we found some useful info that could explain the failures.
if s is not None:
cprint(s, color="red")
count += 1
if not count:
# Inspect all log files produced by the other nodes.
log_files = task.tmpdir.list_filepaths(wildcard="*LOG_*")
if not log_files:
cprint("No *LOG_* file in tmpdir. This usually happens if you are running with many CPUs", color="magenta")
for log_file in log_files:
try:
report = EventsParser().parse(log_file)
if report.errors:
print(report)
count += 1
break
except Exception as exc:
cprint(str(exc), color="red")
count += 1
break
if not count:
cprint("Houston, we could not find any error message that can explain the problem", color="magenta")
print("Number of tasks analyzed: %d" % ntasks)
def cancel(self, nids=None):
"""
Cancel all the tasks that are in the queue.
nids is an optional list of node identifiers used to filter the tasks.
Returns:
Number of jobs cancelled, negative value if error
"""
if self.has_chrooted:
# TODO: Use paramiko to kill the job?
warnings.warn("Cannot cancel the flow via sshfs!")
return -1
# If we are running with the scheduler, we must send a SIGKILL signal.
if os.path.exists(self.pid_file):
cprint("Found scheduler attached to this flow.", "yellow")
cprint("Sending SIGKILL to the scheduler before cancelling the tasks!", "yellow")
with open(self.pid_file, "r") as fh:
pid = int(fh.readline())
retcode = os.system("kill -9 %d" % pid)
self.history.info("Sent SIGKILL to the scheduler, retcode: %s" % retcode)
try:
os.remove(self.pid_file)
except IOError:
pass
num_cancelled = 0
for task in self.iflat_tasks(nids=nids):
num_cancelled += task.cancel()
return num_cancelled
def get_njobs_in_queue(self, username=None):
"""
        Returns the number of jobs in the queue, or None when the number of jobs cannot be determined.
Args:
username: (str) the username of the jobs to count (default is to autodetect)
"""
return self.manager.qadapter.get_njobs_in_queue(username=username)
def rmtree(self, ignore_errors=False, onerror=None):
"""Remove workdir (same API as shutil.rmtree)."""
if not os.path.exists(self.workdir): return
shutil.rmtree(self.workdir, ignore_errors=ignore_errors, onerror=onerror)
def rm_and_build(self):
"""Remove the workdir and rebuild the flow."""
self.rmtree()
self.build()
def build(self, *args, **kwargs):
"""Make directories and files of the `Flow`."""
# Allocate here if not done yet!
if not self.allocated: self.allocate()
self.indir.makedirs()
self.outdir.makedirs()
self.tmpdir.makedirs()
# Check the nodeid file in workdir
nodeid_path = os.path.join(self.workdir, ".nodeid")
if os.path.exists(nodeid_path):
with open(nodeid_path, "rt") as fh:
node_id = int(fh.read())
if self.node_id != node_id:
msg = ("\nFound node_id %s in file:\n\n %s\n\nwhile the node_id of the present flow is %d.\n"
"This means that you are trying to build a new flow in a directory already used by another flow.\n"
"Possible solutions:\n"
" 1) Change the workdir of the new flow.\n"
" 2) remove the old directory either with `rm -rf` or by calling the method flow.rmtree()\n"
% (node_id, nodeid_path, self.node_id))
raise RuntimeError(msg)
else:
with open(nodeid_path, "wt") as fh:
fh.write(str(self.node_id))
for work in self:
work.build(*args, **kwargs)
def build_and_pickle_dump(self, abivalidate=False):
"""
Build dirs and file of the `Flow` and save the object in pickle format.
Returns 0 if success
Args:
            abivalidate: If True, all the input files are validated by calling
                the abinit parser. If the validation fails, ValueError is raised.
"""
self.build()
if not abivalidate: return self.pickle_dump()
# Validation with Abinit.
isok, errors = self.abivalidate_inputs()
if isok: return self.pickle_dump()
errlines = []
for i, e in enumerate(errors):
errlines.append("[%d] %s" % (i, e))
raise ValueError("\n".join(errlines))
@check_spectator
def pickle_dump(self):
"""
Save the status of the object in pickle format.
Returns 0 if success
"""
if self.has_chrooted:
warnings.warn("Cannot pickle_dump since we have chrooted from %s" % self.has_chrooted)
return -1
#if self.in_spectator_mode:
# warnings.warn("Cannot pickle_dump since flow is in_spectator_mode")
# return -2
protocol = self.pickle_protocol
# Atomic transaction with FileLock.
with FileLock(self.pickle_file):
with AtomicFile(self.pickle_file, mode="wb") as fh:
pmg_pickle_dump(self, fh, protocol=protocol)
return 0
def pickle_dumps(self, protocol=None):
"""
Return a string with the pickle representation.
`protocol` selects the pickle protocol. self.pickle_protocol is
used if `protocol` is None
"""
strio = StringIO()
pmg_pickle_dump(self, strio,
protocol=self.pickle_protocol if protocol is None
else protocol)
return strio.getvalue()
def register_task(self, input, deps=None, manager=None, task_class=None):
"""
Utility function that generates a `Work` made of a single task
Args:
input: :class:`AbinitInput`
deps: List of :class:`Dependency` objects specifying the dependency of this node.
                An empty list of deps implies that this node has no dependencies.
manager: The :class:`TaskManager` responsible for the submission of the task.
If manager is None, we use the :class:`TaskManager` specified during the creation of the work.
task_class: Task subclass to instantiate. Default: :class:`AbinitTask`
Returns:
The generated :class:`Work` for the task, work[0] is the actual task.
"""
work = Work(manager=manager)
task = work.register(input, deps=deps, task_class=task_class)
self.register_work(work)
return work
def register_work(self, work, deps=None, manager=None, workdir=None):
"""
Register a new :class:`Work` and add it to the internal list, taking into account possible dependencies.
Args:
work: :class:`Work` object.
deps: List of :class:`Dependency` objects specifying the dependency of this node.
                An empty list of deps implies that this node has no dependencies.
manager: The :class:`TaskManager` responsible for the submission of the task.
If manager is None, we use the `TaskManager` specified during the creation of the work.
workdir: The name of the directory used for the :class:`Work`.
Returns:
The registered :class:`Work`.
"""
if getattr(self, "workdir", None) is not None:
            # The flow has a directory, build the name of the directory of the work.
work_workdir = None
if workdir is None:
work_workdir = os.path.join(self.workdir, "w" + str(len(self)))
else:
work_workdir = os.path.join(self.workdir, os.path.basename(workdir))
work.set_workdir(work_workdir)
if manager is not None:
work.set_manager(manager)
self.works.append(work)
if deps:
deps = [Dependency(node, exts) for node, exts in deps.items()]
work.add_deps(deps)
return work
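    # Example sketch (hedged): `scf_work` and `nscf_work` are hypothetical Work objects;
    # the second registration depends on the DEN file produced by the first task of the first work.
    #
    #   work0 = flow.register_work(scf_work)
    #   flow.register_work(nscf_work, deps={work0[0]: "DEN"})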
def register_work_from_cbk(self, cbk_name, cbk_data, deps, work_class, manager=None):
"""
Registers a callback function that will generate the :class:`Task` of the :class:`Work`.
Args:
cbk_name: Name of the callback function (must be a bound method of self)
cbk_data: Additional data passed to the callback function.
deps: List of :class:`Dependency` objects specifying the dependency of the work.
work_class: :class:`Work` class to instantiate.
manager: The :class:`TaskManager` responsible for the submission of the task.
If manager is None, we use the `TaskManager` specified during the creation of the :class:`Flow`.
Returns:
The :class:`Work` that will be finalized by the callback.
"""
# TODO: pass a Work factory instead of a class
# Directory of the Work.
work_workdir = os.path.join(self.workdir, "w" + str(len(self)))
# Create an empty work and register the callback
work = work_class(workdir=work_workdir, manager=manager)
self._works.append(work)
deps = [Dependency(node, exts) for node, exts in deps.items()]
if not deps:
raise ValueError("A callback must have deps!")
work.add_deps(deps)
# Wrap the callable in a Callback object and save
# useful info such as the index of the work and the callback data.
cbk = FlowCallback(cbk_name, self, deps=deps, cbk_data=cbk_data)
self._callbacks.append(cbk)
return work
@property
def allocated(self):
"""Numer of allocations. Set by `allocate`."""
try:
return self._allocated
except AttributeError:
return 0
def allocate(self, workdir=None):
"""
Allocate the `Flow` i.e. assign the `workdir` and (optionally)
the :class:`TaskManager` to the different tasks in the Flow.
Args:
workdir: Working directory of the flow. Must be specified here
if we haven't initialized the workdir in the __init__.
"""
if workdir is not None:
# We set the workdir of the flow here
self.set_workdir(workdir)
for i, work in enumerate(self):
work.set_workdir(os.path.join(self.workdir, "w" + str(i)))
if not hasattr(self, "workdir"):
raise RuntimeError("You must call flow.allocate(workdir) if the workdir is not passed to __init__")
for work in self:
# Each work has a reference to its flow.
work.allocate(manager=self.manager)
work.set_flow(self)
# Each task has a reference to its work.
for task in work:
task.set_work(work)
self.check_dependencies()
if not hasattr(self, "_allocated"): self._allocated = 0
self._allocated += 1
return self
def use_smartio(self):
"""
This function should be called when the entire `Flow` has been built.
It tries to reduce the pressure on the hard disk by using Abinit smart-io
capabilities for those files that are not needed by other nodes.
Smart-io means that big files (e.g. WFK) are written only if the calculation
is unconverged so that we can restart from it. No output is produced if
convergence is achieved.
"""
if not self.allocated:
self.allocate()
#raise RuntimeError("You must call flow.allocate before invoking flow.use_smartio")
return
for task in self.iflat_tasks():
children = task.get_children()
if not children:
# Change the input so that output files are produced
# only if the calculation is not converged.
task.history.info("Will disable IO for task")
task.set_vars(prtwf=-1, prtden=0) # TODO: prt1wf=-1,
else:
must_produce_abiexts = []
for child in children:
# Get the list of dependencies. Find that task
for d in child.deps:
must_produce_abiexts.extend(d.exts)
must_produce_abiexts = set(must_produce_abiexts)
#print("must_produce_abiexts", must_produce_abiexts)
# Variables supporting smart-io.
smart_prtvars = {
"prtwf": "WFK",
}
# Set the variable to -1 to disable the output
for varname, abiext in smart_prtvars.items():
if abiext not in must_produce_abiexts:
print("%s: setting %s to -1" % (task, varname))
task.set_vars({varname: -1})
#def new_from_input_decorators(self, new_workdir, decorators)
# """
# Return a new :class:`Flow` in which all the Abinit inputs have been
# decorated by decorators.
# """
    # # The tricky part here is how to assign a new id to the new nodes while maintaining the
# # correct dependencies! The safest approach would be to pass through __init__
# # instead of using copy.deepcopy()
# return flow
def show_dependencies(self, stream=sys.stdout):
"""Writes to the given stream the ASCII representation of the dependency tree."""
def child_iter(node):
return [d.node for d in node.deps]
def text_str(node):
return colored(str(node), color=node.status.color_opts["color"])
for task in self.iflat_tasks():
print(draw_tree(task, child_iter, text_str), file=stream)
def on_dep_ok(self, signal, sender):
# TODO
# Replace this callback with dynamic dispatch
# on_all_S_OK for work
# on_S_OK for task
logger.info("on_dep_ok with sender %s, signal %s" % (str(sender), signal))
for i, cbk in enumerate(self._callbacks):
if not cbk.handle_sender(sender):
logger.info("%s does not handle sender %s" % (cbk, sender))
continue
if not cbk.can_execute():
logger.info("Cannot execute %s" % cbk)
continue
# Execute the callback and disable it
self.history.info("flow in on_dep_ok: about to execute callback %s" % str(cbk))
cbk()
cbk.disable()
# Update the database.
self.pickle_dump()
@check_spectator
def finalize(self):
"""
This method is called when the flow is completed.
Return 0 if success
"""
if self.finalized:
self.history.warning("Calling finalize on an already finalized flow.")
return 1
self.history.info("Calling flow.finalize.")
self.finalized = True
if self.has_db:
self.history.info("Saving results in database.")
try:
                self.db_insert()
self.finalized = True
except Exception:
logger.critical("MongoDb insertion failed.")
return 2
# Here we remove the big output files if we have the garbage collector
# and the policy is set to "flow."
if self.gc is not None and self.gc.policy == "flow":
self.history.info("gc.policy set to flow. Will clean task output files.")
for task in self.iflat_tasks():
task.clean_output_files()
return 0
def set_garbage_collector(self, exts=None, policy="task"):
"""
Enable the garbage collector that will remove the big output files that are not needed.
Args:
exts: string or list with the Abinit file extensions to be removed. A default is
provided if exts is None
policy: Either `flow` or `task`. If policy is set to 'task', we remove the output
files as soon as the task reaches S_OK. If 'flow', the files are removed
only when the flow is finalized. This option should be used when we are dealing
with a dynamic flow with callbacks generating other tasks since a :class:`Task`
might not be aware of its children when it reached S_OK.
"""
assert policy in ("task", "flow")
exts = list_strings(exts) if exts is not None else ("WFK", "SUS", "SCR", "BSR", "BSC")
gc = GarbageCollector(exts=set(exts), policy=policy)
self.set_gc(gc)
for work in self:
#work.set_gc(gc) # TODO Add support for Works and flow policy
for task in work:
task.set_gc(gc)
def connect_signals(self):
"""
Connect the signals within the `Flow`.
The `Flow` is responsible for catching the important signals raised from its works.
"""
# Connect the signals inside each Work.
for work in self:
work.connect_signals()
# Observe the nodes that must reach S_OK in order to call the callbacks.
for cbk in self._callbacks:
#cbk.enable()
for dep in cbk.deps:
logger.info("connecting %s \nwith sender %s, signal %s" % (str(cbk), dep.node, dep.node.S_OK))
dispatcher.connect(self.on_dep_ok, signal=dep.node.S_OK, sender=dep.node, weak=False)
# Associate to each signal the callback _on_signal
# (bound method of the node that will be called by `Flow`
# Each node will set its attribute _done_signal to True to tell
# the flow that this callback should be disabled.
# Register the callbacks for the Work.
#for work in self:
# slot = self._sig_slots[work]
# for signal in S_ALL:
# done_signal = getattr(work, "_done_ " + signal, False)
# if not done_sig:
# cbk_name = "_on_" + str(signal)
# cbk = getattr(work, cbk_name, None)
# if cbk is None: continue
# slot[work][signal].append(cbk)
# print("connecting %s\nwith sender %s, signal %s" % (str(cbk), dep.node, dep.node.S_OK))
# dispatcher.connect(self.on_dep_ok, signal=signal, sender=dep.node, weak=False)
# Register the callbacks for the Tasks.
#self.show_receivers()
def disconnect_signals(self):
"""Disable the signals within the `Flow`."""
# Disconnect the signals inside each Work.
for work in self:
work.disconnect_signals()
# Disable callbacks.
for cbk in self._callbacks:
cbk.disable()
def show_receivers(self, sender=None, signal=None):
sender = sender if sender is not None else dispatcher.Any
signal = signal if signal is not None else dispatcher.Any
print("*** live receivers ***")
for rec in dispatcher.liveReceivers(dispatcher.getReceivers(sender, signal)):
print("receiver -->", rec)
print("*** end live receivers ***")
def set_spectator_mode(self, mode=True):
"""
When the flow is in spectator_mode, we have to disable signals, pickle dumps and possible callbacks.
A spectator can still operate on the flow but the new status of the flow won't be saved in
the pickle file. Usually the flow is in spectator mode when we are already running it via
the scheduler or other means and we should not interfere with its evolution.
This is the reason why signals and callbacks must be disabled.
Unfortunately preventing client-code from calling methods with side-effects when
the flow is in spectator mode is not easy (e.g. flow.cancel will cancel the tasks submitted to the
queue and the flow used by the scheduler won't see this change!)
"""
# Set the flags of all the nodes in the flow.
mode = bool(mode)
self.in_spectator_mode = mode
for node in self.iflat_nodes():
node.in_spectator_mode = mode
# connect/disconnect signals depending on mode.
if not mode:
self.connect_signals()
else:
self.disconnect_signals()
#def get_results(self, **kwargs)
def rapidfire(self, check_status=True, **kwargs):
"""
Use :class:`PyLauncher` to submit tasks in rapidfire mode.
kwargs contains the options passed to the launcher.
Return:
number of tasks submitted.
"""
self.check_pid_file()
self.set_spectator_mode(False)
if check_status: self.check_status()
from .launcher import PyLauncher
return PyLauncher(self, **kwargs).rapidfire()
def single_shot(self, check_status=True, **kwargs):
"""
Use :class:`PyLauncher` to submit one task.
kwargs contains the options passed to the launcher.
Return:
number of tasks submitted.
"""
self.check_pid_file()
self.set_spectator_mode(False)
if check_status: self.check_status()
from .launcher import PyLauncher
return PyLauncher(self, **kwargs).single_shot()
def make_scheduler(self, **kwargs):
"""
Build and return a :class:`PyFlowScheduler` to run the flow.
Args:
kwargs: if empty we use the user configuration file.
if `filepath` in kwargs we init the scheduler from filepath.
else pass **kwargs to :class:`PyFlowScheduler` __init__ method.
"""
from .launcher import PyFlowScheduler
if not kwargs:
# User config if kwargs is empty
sched = PyFlowScheduler.from_user_config()
else:
# Use from_file if filepath is present, else call __init__
filepath = kwargs.pop("filepath", None)
if filepath is not None:
assert not kwargs
sched = PyFlowScheduler.from_file(filepath)
else:
sched = PyFlowScheduler(**kwargs)
sched.add_flow(self)
return sched
def batch(self, timelimit=None):
"""
Run the flow in batch mode, return exit status of the job script.
Requires a manager.yml file and a batch_adapter adapter.
Args:
timelimit: Time limit (int with seconds or string with time given with the slurm convention:
"days-hours:minutes:seconds"). If timelimit is None, the default value specified in the
`batch_adapter` entry of `manager.yml` is used.
"""
from .launcher import BatchLauncher
# Create a batch dir from the flow.workdir.
prev_dir = os.path.join(*self.workdir.split(os.path.sep)[:-1])
prev_dir = os.path.join(os.path.sep, prev_dir)
workdir = os.path.join(prev_dir, os.path.basename(self.workdir) + "_batch")
return BatchLauncher(workdir=workdir, flows=self).submit(timelimit=timelimit)
def make_light_tarfile(self, name=None):
"""Lightweight tarball file. Mainly used for debugging. Return the name of the tarball file."""
name = os.path.basename(self.workdir) + "-light.tar.gz" if name is None else name
return self.make_tarfile(name=name, exclude_dirs=["outdata", "indata", "tmpdata"])
def make_tarfile(self, name=None, max_filesize=None, exclude_exts=None, exclude_dirs=None, verbose=0, **kwargs):
"""
Create a tarball file.
Args:
name: Name of the tarball file. Set to ``os.path.basename(flow.workdir) + ".tar.gz"`` if name is None.
max_filesize (int or string with unit): a file is included in the tar file if its size <= max_filesize
Can be specified in bytes e.g. `max_filesize=1024` or with a string with unit e.g. `max_filesize="1 Mb"`.
No check is done if max_filesize is None.
exclude_exts: List of file extensions to be excluded from the tar file.
exclude_dirs: List of directory basenames to be excluded.
verbose (int): Verbosity level.
kwargs: keyword arguments passed to the :class:`TarFile` constructor.
Returns:
The name of the tarfile.
"""
def any2bytes(s):
"""Convert string or number to memory in bytes."""
if is_string(s):
return int(Memory.from_string(s).to("b"))
else:
return int(s)
if max_filesize is not None:
max_filesize = any2bytes(max_filesize)
if exclude_exts:
# Add/remove ".nc" so that we can simply pass "GSR" instead of "GSR.nc"
# Moreover this trick allows one to treat WFK.nc and WFK file on the same footing.
exts = []
for e in list_strings(exclude_exts):
exts.append(e)
if e.endswith(".nc"):
exts.append(e.replace(".nc", ""))
else:
exts.append(e + ".nc")
exclude_exts = exts
def filter(tarinfo):
"""
Function that takes a TarInfo object argument and returns the changed TarInfo object.
If it instead returns None the TarInfo object will be excluded from the archive.
"""
# Skip links.
if tarinfo.issym() or tarinfo.islnk():
if verbose: print("Excluding link: %s" % tarinfo.name)
return None
# Check size in bytes
if max_filesize is not None and tarinfo.size > max_filesize:
if verbose: print("Excluding %s due to max_filesize" % tarinfo.name)
return None
# Filter filenames.
if exclude_exts and any(tarinfo.name.endswith(ext) for ext in exclude_exts):
if verbose: print("Excluding %s due to extension" % tarinfo.name)
return None
# Exclude directories (use dir basenames).
if exclude_dirs and any(dir_name in exclude_dirs for dir_name in tarinfo.name.split(os.path.sep)):
if verbose: print("Excluding %s due to exclude_dirs" % tarinfo.name)
return None
return tarinfo
back = os.getcwd()
os.chdir(os.path.join(self.workdir, ".."))
import tarfile
name = os.path.basename(self.workdir) + ".tar.gz" if name is None else name
with tarfile.open(name=name, mode='w:gz', **kwargs) as tar:
tar.add(os.path.basename(self.workdir), arcname=None, recursive=True, exclude=None, filter=filter)
# Add the script used to generate the flow.
if self.pyfile is not None and os.path.exists(self.pyfile):
tar.add(self.pyfile)
os.chdir(back)
return name
#def abirobot(self, ext, check_status=True, nids=None):
# """
# Builds and return the :class:`Robot` subclass from the file extension `ext`.
# `nids` is an optional list of node identifiers used to filter the tasks in the flow.
# """
# from abipy.abilab import abirobot
# if check_status: self.check_status()
# return abirobot(flow=self, ext=ext, nids=nids):
@add_fig_kwargs
def plot_networkx(self, mode="network", with_edge_labels=False, ax=None,
node_size="num_cores", node_label="name_class", layout_type="spring", **kwargs):
"""
Use networkx to draw the flow with the connections among the nodes and
the status of the tasks.
Args:
mode: `networkx` to show connections, `status` to group tasks by status.
with_edge_labels: True to draw edge labels.
ax: matplotlib :class:`Axes` or None if a new figure should be created.
node_size: By default, the size of the node is proportional to the number of cores used.
node_label: By default, the task class is used to label the node.
layout_type: Get positions for all nodes using `layout_type`. e.g. pos = nx.spring_layout(g)
.. warning::
Requires networkx package.
"""
if not self.allocated: self.allocate()
import networkx as nx
# Build the graph
g, edge_labels = nx.Graph(), {}
tasks = list(self.iflat_tasks())
for task in tasks:
g.add_node(task, name=task.name)
for child in task.get_children():
g.add_edge(task, child)
# TODO: Add getters! What about locked nodes!
i = [dep.node for dep in child.deps].index(task)
edge_labels[(task, child)] = " ".join(child.deps[i].exts)
# Get positions for all nodes using layout_type.
# e.g. pos = nx.spring_layout(g)
pos = getattr(nx, layout_type + "_layout")(g)
# Select function used to compute the size of the node
make_node_size = dict(num_cores=lambda task: 300 * task.manager.num_cores)[node_size]
# Select function used to build the label
make_node_label = dict(name_class=lambda task: task.pos_str + "\n" + task.__class__.__name__,)[node_label]
labels = {task: make_node_label(task) for task in g.nodes()}
ax, fig, plt = get_ax_fig_plt(ax=ax)
# Select plot type.
if mode == "network":
nx.draw_networkx(g, pos, labels=labels,
node_color=[task.color_rgb for task in g.nodes()],
node_size=[make_node_size(task) for task in g.nodes()],
width=1, style="dotted", with_labels=True, ax=ax)
# Draw edge labels
if with_edge_labels:
nx.draw_networkx_edge_labels(g, pos, edge_labels=edge_labels, ax=ax)
elif mode == "status":
# Group tasks by status.
for status in self.ALL_STATUS:
tasks = list(self.iflat_tasks(status=status))
# Draw nodes (color is given by status)
node_color = status.color_opts["color"]
if node_color is None: node_color = "black"
#print("num nodes %s with node_color %s" % (len(tasks), node_color))
nx.draw_networkx_nodes(g, pos,
nodelist=tasks,
node_color=node_color,
node_size=[make_node_size(task) for task in tasks],
alpha=0.5, ax=ax
#label=str(status),
)
# Draw edges.
nx.draw_networkx_edges(g, pos, width=2.0, alpha=0.5, arrows=True, ax=ax) # edge_color='r')
# Draw labels
nx.draw_networkx_labels(g, pos, labels, font_size=12, ax=ax)
# Draw edge labels
if with_edge_labels:
nx.draw_networkx_edge_labels(g, pos, edge_labels=edge_labels, ax=ax)
#label_pos=0.5, font_size=10, font_color='k', font_family='sans-serif', font_weight='normal',
# alpha=1.0, bbox=None, ax=None, rotate=True, **kwds)
else:
raise ValueError("Unknown value for mode: %s" % str(mode))
ax.axis("off")
return fig
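# A minimal usage sketch of the inspection helpers defined on Flow above.
# `flow` is assumed to be an already built and allocated Flow instance
# (hypothetical placeholder); the function below is only illustrative and is
# never executed at import time.
def _sketch_inspect_flow(flow):
    """Print the ASCII dependency tree and draw the networkx graph of a flow."""
    # ASCII view of the task dependencies on stdout.
    flow.show_dependencies()
    # Graphical views: full connection graph, then tasks grouped by status.
    flow.plot_networkx(mode="network", with_edge_labels=True)
    flow.plot_networkx(mode="status")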
class G0W0WithQptdmFlow(Flow):
def __init__(self, workdir, scf_input, nscf_input, scr_input, sigma_inputs, manager=None):
"""
Build a :class:`Flow` for one-shot G0W0 calculations.
The computation of the q-points for the screening is parallelized with qptdm
i.e. we run independent calculations for each q-point and then we merge the final results.
Args:
workdir: Working directory.
scf_input: Input for the GS SCF run.
nscf_input: Input for the NSCF run (band structure run).
scr_input: Input for the SCR run.
sigma_inputs: Input(s) for the SIGMA run(s).
manager: :class:`TaskManager` object used to submit the jobs
Initialized from manager.yml if manager is None.
"""
super(G0W0WithQptdmFlow, self).__init__(workdir, manager=manager)
# Register the first work (GS + NSCF calculation)
bands_work = self.register_work(BandStructureWork(scf_input, nscf_input))
# Register the callback that will build the work for the SCR run parallelized with qptdm.
scr_work = self.register_work_from_cbk(cbk_name="cbk_qptdm_workflow", cbk_data={"input": scr_input},
deps={bands_work.nscf_task: "WFK"}, work_class=QptdmWork)
# The last work contains a list of SIGMA tasks
# that will use the data produced in the previous two works.
if not isinstance(sigma_inputs, (list, tuple)):
sigma_inputs = [sigma_inputs]
sigma_work = Work()
for sigma_input in sigma_inputs:
sigma_work.register_sigma_task(sigma_input, deps={bands_work.nscf_task: "WFK", scr_work: "SCR"})
self.register_work(sigma_work)
self.allocate()
def cbk_qptdm_workflow(self, cbk):
"""
This callback is executed by the flow when bands_work.nscf_task reaches S_OK.
It computes the list of q-points for the W(q,G,G'), creates nqpt tasks
in the second work (QptdmWork), and connect the signals.
"""
scr_input = cbk.data["input"]
# Use the WFK file produced by the second
# Task in the first Work (NSCF step).
nscf_task = self[0][1]
wfk_file = nscf_task.outdir.has_abiext("WFK")
work = self[1]
work.set_manager(self.manager)
work.create_tasks(wfk_file, scr_input)
work.add_deps(cbk.deps)
work.set_flow(self)
# Each task has a reference to its work.
for task in work:
task.set_work(work)
# Add the garbage collector.
if self.gc is not None: task.set_gc(self.gc)
work.connect_signals()
work.build()
return work
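# A minimal construction sketch for the class above. The four input objects are
# hypothetical AbinitInput instances assumed to be prepared elsewhere; only the
# constructor defined above is used.
def _sketch_g0w0_qptdm(workdir, scf_inp, nscf_inp, scr_inp, sigma_inp):
    """Build a G0W0WithQptdmFlow; tasks can then be submitted e.g. with rapidfire()."""
    flow = G0W0WithQptdmFlow(workdir, scf_inp, nscf_inp, scr_inp, sigma_inp)
    # flow.rapidfire() would submit the tasks via PyLauncher (see Flow.rapidfire above).
    return flow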
class FlowCallbackError(Exception):
"""Exceptions raised by FlowCallback."""
class FlowCallback(object):
"""
This object implements the callbacks executed by the :class:`Flow` when
particular conditions are fulfilled. See on_dep_ok method of :class:`Flow`.
.. note::
I decided to implement callbacks via this object instead of a standard
approach based on bound methods because:
1) pickle (v<=3) does not support the pickling/unpickling of bound methods
2) There's some extra logic and extra data needed for the proper functioning
of a callback at the flow level and this object provides an easy-to-use interface.
"""
Error = FlowCallbackError
def __init__(self, func_name, flow, deps, cbk_data):
"""
Args:
func_name: String with the name of the callback to execute.
func_name must be a bound method of flow with signature:
func_name(self, cbk)
where self is the Flow instance and cbk is the callback
flow: Reference to the :class:`Flow`
deps: List of dependencies associated to the callback
The callback is executed when all dependencies reach S_OK.
cbk_data: Dictionary with additional data that will be passed to the callback via self.
"""
self.func_name = func_name
self.flow = flow
self.deps = deps
self.data = cbk_data or {}
self._disabled = False
def __str__(self):
return "%s: %s bound to %s" % (self.__class__.__name__, self.func_name, self.flow)
def __call__(self):
"""Execute the callback."""
if self.can_execute():
# Get the bound method of the flow from func_name.
# We use this trick because pickle (format <=3) does not support bound methods.
try:
func = getattr(self.flow, self.func_name)
except AttributeError as exc:
raise self.Error(str(exc))
return func(self)
else:
raise self.Error("You tried to __call__ a callback that cannot be executed!")
def can_execute(self):
"""True if we can execute the callback."""
return not self._disabled and all(dep.status == dep.node.S_OK for dep in self.deps)
def disable(self):
"""
Disable the callback. This is usually done after the callback has been executed.
"""
self._disabled = True
def enable(self):
"""Enable the callback"""
self._disabled = False
def handle_sender(self, sender):
"""
True if the callback is associated to the sender
i.e. if the node who sent the signal appears in the
dependencies of the callback.
"""
return sender in [d.node for d in self.deps]
# Factory functions.
def bandstructure_flow(workdir, scf_input, nscf_input, dos_inputs=None, manager=None, flow_class=Flow, allocate=True):
"""
Build a :class:`Flow` for band structure calculations.
Args:
workdir: Working directory.
scf_input: Input for the GS SCF run.
nscf_input: Input for the NSCF run (band structure run).
dos_inputs: Input(s) for the NSCF run (dos run).
manager: :class:`TaskManager` object used to submit the jobs
Initialized from manager.yml if manager is None.
flow_class: Flow subclass
allocate: True if the flow should be allocated before returning.
Returns:
:class:`Flow` object
"""
flow = flow_class(workdir, manager=manager)
work = BandStructureWork(scf_input, nscf_input, dos_inputs=dos_inputs)
flow.register_work(work)
# Handy aliases
flow.scf_task, flow.nscf_task, flow.dos_tasks = work.scf_task, work.nscf_task, work.dos_tasks
if allocate: flow.allocate()
return flow
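# A minimal usage sketch for the factory above. The input objects are hypothetical
# AbinitInput instances assumed to be built elsewhere; only bandstructure_flow and
# the task aliases it attaches are used.
def _sketch_bandstructure(workdir, scf_input, nscf_input, dos_inputs=None):
    flow = bandstructure_flow(workdir, scf_input, nscf_input, dos_inputs=dos_inputs)
    # Handy aliases attached by the factory:
    print("SCF task:", flow.scf_task, "- NSCF task:", flow.nscf_task)
    return flow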
def g0w0_flow(workdir, scf_input, nscf_input, scr_input, sigma_inputs, manager=None, flow_class=Flow, allocate=True):
"""
Build a :class:`Flow` for one-shot $G_0W_0$ calculations.
Args:
workdir: Working directory.
scf_input: Input for the GS SCF run.
nscf_input: Input for the NSCF run (band structure run).
scr_input: Input for the SCR run.
sigma_inputs: List of inputs for the SIGMA run.
flow_class: Flow class
manager: :class:`TaskManager` object used to submit the jobs.
Initialized from manager.yml if manager is None.
allocate: True if the flow should be allocated before returning.
Returns:
:class:`Flow` object
"""
flow = flow_class(workdir, manager=manager)
work = G0W0Work(scf_input, nscf_input, scr_input, sigma_inputs)
flow.register_work(work)
if allocate: flow.allocate()
return flow
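# A minimal usage sketch for the G0W0 factory above, under the same assumptions:
# the four AbinitInput objects are hypothetical placeholders prepared elsewhere.
def _sketch_g0w0(workdir, scf_input, nscf_input, scr_input, sigma_inputs):
    flow = g0w0_flow(workdir, scf_input, nscf_input, scr_input, sigma_inputs)
    # Submit the tasks with the PyLauncher-based rapidfire mode defined on Flow.
    flow.rapidfire()
    return flow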
class PhononFlow(Flow):
"""
1) One workflow for the GS run.
2) nqpt works for phonon calculations. Each work contains
nirred tasks where nirred is the number of irreducible phonon perturbations
for that particular q-point.
"""
@classmethod
def from_scf_input(cls, workdir, scf_input, ph_ngqpt, with_becs=True, manager=None, allocate=True):
"""
Create a `PhononFlow` for phonon calculations from an `AbinitInput` defining a ground-state run.
Args:
workdir: Working directory of the flow.
scf_input: :class:`AbinitInput` object with the parameters for the GS-SCF run.
ph_ngqpt: q-mesh for phonons. Must be a sub-mesh of the k-mesh used for
electrons, e.g. if ngkpt = (8, 8, 8), ph_ngqpt = (4, 4, 4) is a valid choice
whereas ph_ngqpt = (3, 3, 3) is not!
with_becs: True if Born effective charges are wanted.
manager: :class:`TaskManager` object. Read from `manager.yml` if None.
allocate: True if the flow should be allocated before returning.
Return:
:class:`PhononFlow` object.
"""
flow = cls(workdir, manager=manager)
# Register the SCF task
flow.register_scf_task(scf_input)
scf_task = flow[0][0]
# Make sure k-mesh and q-mesh are compatible.
scf_ngkpt, ph_ngqpt = np.array(scf_input["ngkpt"]), np.array(ph_ngqpt)
if any(scf_ngkpt % ph_ngqpt != 0):
raise ValueError("ph_ngqpt %s should be a sub-mesh of scf_ngkpt %s" % (ph_ngqpt, scf_ngkpt))
# Get the q-points in the IBZ from Abinit
qpoints = scf_input.abiget_ibz(ngkpt=ph_ngqpt, shiftk=(0,0,0), kptopt=1).points
# Create a PhononWork for each q-point. Add DDK and E-field if q == Gamma and with_becs.
for qpt in qpoints:
if np.allclose(qpt, 0) and with_becs:
ph_work = BecWork.from_scf_task(scf_task)
else:
ph_work = PhononWork.from_scf_task(scf_task, qpoints=qpt)
flow.register_work(ph_work)
if allocate: flow.allocate()
return flow
def open_final_ddb(self):
"""
Open the DDB file located in the output directory of the flow.
Return:
:class:`DdbFile` object, None if file could not be found or file is not readable.
"""
ddb_path = self.outdir.has_abiext("DDB")
if not ddb_path:
if self.status == self.S_OK:
logger.critical("%s reached S_OK but didn't produce a DDB file in %s" % (self, self.outdir))
return None
from abipy.dfpt.ddb import DdbFile
try:
return DdbFile(ddb_path)
except Exception as exc:
logger.critical("Exception while reading DDB file at %s:\n%s" % (ddb_path, str(exc)))
return None
def finalize(self):
"""This method is called when the flow is completed."""
# Merge all the out_DDB files found in work.outdir.
ddb_files = list(filter(None, [work.outdir.has_abiext("DDB") for work in self]))
# Final DDB file will be produced in the outdir of the flow.
out_ddb = self.outdir.path_in("out_DDB")
desc = "DDB file merged by %s on %s" % (self.__class__.__name__, time.asctime())
mrgddb = wrappers.Mrgddb(manager=self.manager, verbose=0)
mrgddb.merge(self.outdir.path, ddb_files, out_ddb=out_ddb, description=desc)
print("Final DDB file available at %s" % out_ddb)
# Call the method of the super class.
retcode = super(PhononFlow, self).finalize()
#print("retcode", retcode)
#if retcode != 0: return retcode
return retcode
class NonLinearCoeffFlow(Flow):
"""
1) One workflow for the GS run.
2) nqpt works for electric field calculations. Each work contains
nirred tasks where nirred is the number of irreducible perturbations
for that particular q-point.
"""
@classmethod
def from_scf_input(cls, workdir, scf_input, manager=None, allocate=True):
"""
Create a `NonLinearCoeffFlow` for second-order susceptibility calculations from an `AbinitInput` defining a ground-state run.
Args:
workdir: Working directory of the flow.
scf_input: :class:`AbinitInput` object with the parameters for the GS-SCF run.
manager: :class:`TaskManager` object. Read from `manager.yml` if None.
allocate: True if the flow should be allocated before returning.
Return:
:class:`NonLinearCoeffFlow` object.
"""
flow = cls(workdir, manager=manager)
flow.register_scf_task(scf_input)
scf_task = flow[0][0]
nl_work = DteWork.from_scf_task(scf_task)
flow.register_work(nl_work)
if allocate: flow.allocate()
return flow
def open_final_ddb(self):
"""
Open the DDB file located in the output directory of the flow.
Return:
:class:`DdbFile` object, None if file could not be found or file is not readable.
"""
ddb_path = self.outdir.has_abiext("DDB")
if not ddb_path:
if self.status == self.S_OK:
logger.critical("%s reached S_OK but didn't produce a DDB file in %s" % (self, self.outdir))
return None
from abipy.dfpt.ddb import DdbFile
try:
return DdbFile(ddb_path)
except Exception as exc:
logger.critical("Exception while reading DDB file at %s:\n%s" % (ddb_path, str(exc)))
return None
def finalize(self):
"""This method is called when the flow is completed."""
# Merge all the out_DDB files found in work.outdir.
ddb_files = list(filter(None, [work.outdir.has_abiext("DDB") for work in self]))
# Final DDB file will be produced in the outdir of the flow.
out_ddb = self.outdir.path_in("out_DDB")
desc = "DDB file merged by %s on %s" % (self.__class__.__name__, time.asctime())
mrgddb = wrappers.Mrgddb(manager=self.manager, verbose=0)
mrgddb.merge(self.outdir.path, ddb_files, out_ddb=out_ddb, description=desc)
print("Final DDB file available at %s" % out_ddb)
# Call the method of the super class.
retcode = super(NonLinearCoeffFlow, self).finalize()
print("retcode", retcode)
#if retcode != 0: return retcode
return retcode
# Alias for compatibility reasons. For the time being, DO NOT REMOVE
nonlinear_coeff_flow = NonLinearCoeffFlow
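# A minimal usage sketch for NonLinearCoeffFlow. `scf_input` is a hypothetical
# AbinitInput for the ground-state run; open_final_ddb (defined above) returns
# None until the flow has actually produced the merged DDB file.
def _sketch_nonlinear(workdir, scf_input):
    flow = NonLinearCoeffFlow.from_scf_input(workdir, scf_input)
    # ... run the flow (e.g. via flow.make_scheduler()), then inspect the DDB:
    ddb = flow.open_final_ddb()
    return flow, ddb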
def phonon_flow(workdir, scf_input, ph_inputs, with_nscf=False, with_ddk=False, with_dde=False,
manager=None, flow_class=PhononFlow, allocate=True):
"""
Build a :class:`PhononFlow` for phonon calculations.
Args:
workdir: Working directory.
scf_input: Input for the GS SCF run.
ph_inputs: List of Inputs for the phonon runs.
with_nscf: add an nscf task in front of all phonon tasks to make sure the q-point is covered
with_ddk: add the ddk step
with_dde: add the dde step; if with_dde is set, ddk is switched on automatically
manager: :class:`TaskManager` used to submit the jobs
Initialized from manager.yml if manager is None.
flow_class: Flow class
Returns:
:class:`Flow` object
"""
logger.critical("phonon_flow is deprecated and could give wrong results")
if with_dde:
with_ddk = True
natom = len(scf_input.structure)
# Create the container that will manage the different works.
flow = flow_class(workdir, manager=manager)
# Register the first work (GS calculation)
# register_task creates a work for the task, registers it to the flow and returns the work
# the 0th element of the work is the task
scf_task = flow.register_task(scf_input, task_class=ScfTask)[0]
# Build a temporary work with a shell manager just to run
# ABINIT to get the list of irreducible perturbations for this q-point.
shell_manager = flow.manager.to_shell_manager(mpi_procs=1)
if with_ddk:
logger.info('add ddk')
# TODO
# MG Warning: be careful here because one should use tolde or tolwfr (tolvrs shall not be used!)
ddk_input = ph_inputs[0].deepcopy()
ddk_input.set_vars(qpt=[0, 0, 0], rfddk=1, rfelfd=2, rfdir=[1, 1, 1])
ddk_task = flow.register_task(ddk_input, deps={scf_task: 'WFK'}, task_class=DdkTask)[0]
if with_dde:
logger.info('add dde')
dde_input = ph_inputs[0].deepcopy()
dde_input.set_vars(qpt=[0, 0, 0], rfddk=1, rfelfd=2)
dde_input_idir = dde_input.deepcopy()
dde_input_idir.set_vars(rfdir=[1, 1, 1])
dde_task = flow.register_task(dde_input, deps={scf_task: 'WFK', ddk_task: 'DDK'}, task_class=DdeTask)[0]
if not isinstance(ph_inputs, (list, tuple)):
ph_inputs = [ph_inputs]
for i, ph_input in enumerate(ph_inputs):
fake_input = ph_input.deepcopy()
# Run abinit on the front-end to get the list of irreducible perturbations.
tmp_dir = os.path.join(workdir, "__ph_run" + str(i) + "__")
w = PhononWork(workdir=tmp_dir, manager=shell_manager)
fake_task = w.register(fake_input)
# Use the magic value paral_rf = -1 to get the list of irreducible perturbations for this q-point.
abivars = dict(
paral_rf=-1,
rfatpol=[1, natom], # Set of atoms to displace.
rfdir=[1, 1, 1], # Along this set of reduced coordinate axis.
)
fake_task.set_vars(abivars)
w.allocate()
w.start(wait=True)
# Parse the file to get the perturbations.
try:
irred_perts = yaml_read_irred_perts(fake_task.log_file.path)
except:
print("Error in %s" % fake_task.log_file.path)
raise
logger.info(irred_perts)
w.rmtree()
# Now we can build the final list of works:
# One work per q-point, each work computes all
# the irreducible perturbations for a single q-point.
work_qpt = PhononWork()
if with_nscf:
# MG: Warning, this code assumes 0 is Gamma!
nscf_input = copy.deepcopy(scf_input)
nscf_input.set_vars(kptopt=3, iscf=-3, qpt=irred_perts[0]['qpt'], nqpt=1)
nscf_task = work_qpt.register_nscf_task(nscf_input, deps={scf_task: "DEN"})
deps = {nscf_task: "WFQ", scf_task: "WFK"}
else:
deps = {scf_task: "WFK"}
if with_ddk:
deps[ddk_task] = 'DDK'
logger.info(irred_perts[0]['qpt'])
for irred_pert in irred_perts:
#print(irred_pert)
new_input = ph_input.deepcopy()
#rfatpol 1 1 # Only the first atom is displaced
#rfdir 1 0 0 # Along the first reduced coordinate axis
qpt = irred_pert["qpt"]
idir = irred_pert["idir"]
ipert = irred_pert["ipert"]
# TODO this will work for phonons, but not for the other types of perturbations.
rfdir = 3 * [0]
rfdir[idir - 1] = 1
rfatpol = [ipert, ipert]
new_input.set_vars(
#rfpert=1,
qpt=qpt,
rfdir=rfdir,
rfatpol=rfatpol,
)
if with_ddk:
new_input.set_vars(rfelfd=3)
work_qpt.register_phonon_task(new_input, deps=deps)
flow.register_work(work_qpt)
if allocate: flow.allocate()
return flow
def phonon_conv_flow(workdir, scf_input, qpoints, params, manager=None, allocate=True):
"""
Create a :class:`Flow` to perform convergence studies for phonon calculations.
Args:
workdir: Working directory of the flow.
scf_input: :class:`AbinitInput` object defining a GS-SCF calculation.
qpoints: List of list of lists with the reduced coordinates of the q-point(s).
params:
To perform a convergence study wrt ecut: params=["ecut", [2, 4, 6]]
manager: :class:`TaskManager` object responsible for the submission of the jobs.
If manager is None, the object is initialized from the yaml file
located either in the working directory or in the user configuration dir.
allocate: True if the flow should be allocated before returning.
Return:
:class:`Flow` object.
"""
qpoints = np.reshape(qpoints, (-1, 3))
flow = Flow(workdir=workdir, manager=manager)
for qpt in qpoints:
for gs_inp in scf_input.product(*params):
# Register the SCF task
work = flow.register_scf_task(gs_inp)
# Add the PhononWork connected to this scf_task.
flow.register_work(PhononWork.from_scf_task(work[0], qpoints=qpt))
if allocate: flow.allocate()
return flow
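# A minimal usage sketch for the convergence factory above: one PhononWork per
# (q-point, ecut) combination. `scf_input` is a hypothetical AbinitInput whose
# `product` method generates the inputs, as described in the docstring.
def _sketch_phonon_conv(workdir, scf_input):
    return phonon_conv_flow(workdir, scf_input,
                            qpoints=[[0, 0, 0]],
                            params=["ecut", [4, 8, 12]])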
| mit |
Titan-C/scikit-learn | examples/ensemble/plot_random_forest_regression_multioutput.py | 46 | 2640 | """
============================================================
Comparing random forests and the multi-output meta estimator
============================================================
An example to compare multi-output regression with random forest and
the :ref:`multioutput.MultiOutputRegressor <multiclass>` meta-estimator.
This example illustrates the use of the
:ref:`multioutput.MultiOutputRegressor <multiclass>` meta-estimator
to perform multi-output regression. A random forest regressor is used,
which supports multi-output regression natively, so the results can be
compared.
The random forest regressor will only ever predict values within the
range of observations or closer to zero for each of the targets. As a
result the predictions are biased towards the centre of the circle.
Using a single underlying feature the model learns both the
x and y coordinate as output.
"""
print(__doc__)
# Author: Tim Head <[email protected]>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(600, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y += (0.5 - rng.rand(*y.shape))
X_train, X_test, y_train, y_test = train_test_split(X, y,
train_size=400,
random_state=4)
max_depth = 30
regr_multirf = MultiOutputRegressor(RandomForestRegressor(max_depth=max_depth,
random_state=0))
regr_multirf.fit(X_train, y_train)
regr_rf = RandomForestRegressor(max_depth=max_depth, random_state=2)
regr_rf.fit(X_train, y_train)
# Predict on new data
y_multirf = regr_multirf.predict(X_test)
y_rf = regr_rf.predict(X_test)
# Plot the results
plt.figure()
s = 50
a = 0.4
plt.scatter(y_test[:, 0], y_test[:, 1],
c="navy", s=s, marker="s", alpha=a, label="Data")
plt.scatter(y_multirf[:, 0], y_multirf[:, 1],
c="cornflowerblue", s=s, alpha=a,
label="Multi RF score=%.2f" % regr_multirf.score(X_test, y_test))
plt.scatter(y_rf[:, 0], y_rf[:, 1],
c="c", s=s, marker="^", alpha=a,
label="RF score=%.2f" % regr_rf.score(X_test, y_test))
plt.xlim([-6, 6])
plt.ylim([-6, 6])
plt.xlabel("target 1")
plt.ylabel("target 2")
plt.title("Comparing random forests and the multi-output meta estimator")
plt.legend()
plt.show()
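# Optional text summary of the same comparison: the held-out R^2 scores shown
# in the legend above, printed with the estimators fitted earlier in this script.
print("MultiOutputRegressor(RandomForest) R^2: %.2f" % regr_multirf.score(X_test, y_test))
print("RandomForestRegressor (native multi-output) R^2: %.2f" % regr_rf.score(X_test, y_test))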
| bsd-3-clause |
poojavade/Genomics_Docker | Dockerfiles/gedlab-khmer-filter-abund/pymodules/python2.7/lib/python/statsmodels-0.5.0-py2.7-linux-x86_64.egg/statsmodels/tsa/base/tests/test_datetools.py | 3 | 6363 | from datetime import datetime
import numpy.testing as npt
from statsmodels.tsa.base.datetools import (_date_from_idx,
_idx_from_dates, date_parser, date_range_str, dates_from_str,
dates_from_range, _infer_freq, _freq_to_pandas)
def test_date_from_idx():
d1 = datetime(2008, 12, 31)
idx = 15
npt.assert_equal(_date_from_idx(d1, idx, 'Q'), datetime(2012, 9, 30))
npt.assert_equal(_date_from_idx(d1, idx, 'A'), datetime(2023, 12, 31))
npt.assert_equal(_date_from_idx(d1, idx, 'B'), datetime(2009, 1, 21))
npt.assert_equal(_date_from_idx(d1, idx, 'D'), datetime(2009, 1, 15))
npt.assert_equal(_date_from_idx(d1, idx, 'W'), datetime(2009, 4, 12))
npt.assert_equal(_date_from_idx(d1, idx, 'M'), datetime(2010, 3, 31))
def test_idx_from_date():
d1 = datetime(2008, 12, 31)
idx = 15
npt.assert_equal(_idx_from_dates(d1, datetime(2012, 9, 30), 'Q'), idx)
npt.assert_equal(_idx_from_dates(d1, datetime(2023, 12, 31), 'A'), idx)
npt.assert_equal(_idx_from_dates(d1, datetime(2009, 1, 21), 'B'), idx)
npt.assert_equal(_idx_from_dates(d1, datetime(2009, 1, 15), 'D'), idx)
# move d1 and d2 forward to end of week
npt.assert_equal(_idx_from_dates(datetime(2009, 1, 4),
datetime(2009, 4, 17), 'W'), idx-1)
npt.assert_equal(_idx_from_dates(d1, datetime(2010, 3, 31), 'M'), idx)
def test_regex_matching_month():
t1 = "1999m4"
t2 = "1999:m4"
t3 = "1999:mIV"
t4 = "1999mIV"
result = datetime(1999, 4, 30)
npt.assert_equal(date_parser(t1), result)
npt.assert_equal(date_parser(t2), result)
npt.assert_equal(date_parser(t3), result)
npt.assert_equal(date_parser(t4), result)
def test_regex_matching_quarter():
t1 = "1999q4"
t2 = "1999:q4"
t3 = "1999:qIV"
t4 = "1999qIV"
result = datetime(1999, 12, 31)
npt.assert_equal(date_parser(t1), result)
npt.assert_equal(date_parser(t2), result)
npt.assert_equal(date_parser(t3), result)
npt.assert_equal(date_parser(t4), result)
def test_dates_from_range():
results = [datetime(1959, 3, 31, 0, 0),
datetime(1959, 6, 30, 0, 0),
datetime(1959, 9, 30, 0, 0),
datetime(1959, 12, 31, 0, 0),
datetime(1960, 3, 31, 0, 0),
datetime(1960, 6, 30, 0, 0),
datetime(1960, 9, 30, 0, 0),
datetime(1960, 12, 31, 0, 0),
datetime(1961, 3, 31, 0, 0),
datetime(1961, 6, 30, 0, 0),
datetime(1961, 9, 30, 0, 0),
datetime(1961, 12, 31, 0, 0),
datetime(1962, 3, 31, 0, 0),
datetime(1962, 6, 30, 0, 0)]
dt_range = dates_from_range('1959q1', '1962q2')
npt.assert_(results == dt_range)
# test with starting period not the first with length
results = results[2:]
dt_range = dates_from_range('1959q3', length=len(results))
npt.assert_(results == dt_range)
# check month
results = [datetime(1959, 3, 31, 0, 0),
datetime(1959, 4, 30, 0, 0),
datetime(1959, 5, 31, 0, 0),
datetime(1959, 6, 30, 0, 0),
datetime(1959, 7, 31, 0, 0),
datetime(1959, 8, 31, 0, 0),
datetime(1959, 9, 30, 0, 0),
datetime(1959, 10, 31, 0, 0),
datetime(1959, 11, 30, 0, 0),
datetime(1959, 12, 31, 0, 0),
datetime(1960, 1, 31, 0, 0),
datetime(1960, 2, 28, 0, 0),
datetime(1960, 3, 31, 0, 0),
datetime(1960, 4, 30, 0, 0),
datetime(1960, 5, 31, 0, 0),
datetime(1960, 6, 30, 0, 0),
datetime(1960, 7, 31, 0, 0),
datetime(1960, 8, 31, 0, 0),
datetime(1960, 9, 30, 0, 0),
datetime(1960, 10, 31, 0, 0),
datetime(1960, 12, 31, 0, 0),
datetime(1961, 1, 31, 0, 0),
datetime(1961, 2, 28, 0, 0),
datetime(1961, 3, 31, 0, 0),
datetime(1961, 4, 30, 0, 0),
datetime(1961, 5, 31, 0, 0),
datetime(1961, 6, 30, 0, 0),
datetime(1961, 7, 31, 0, 0),
datetime(1961, 8, 31, 0, 0),
datetime(1961, 9, 30, 0, 0),
datetime(1961, 10, 31, 0, 0)]
dt_range = dates_from_range("1959m3", length=len(results))
try:
from pandas import DatetimeIndex
_pandas_08x = True
except ImportError, err:
_pandas_08x = False
d1 = datetime(2008, 12, 31)
d2 = datetime(2012, 9, 30)
def test_infer_freq():
d1 = datetime(2008, 12, 31)
d2 = datetime(2012, 9, 30)
if _pandas_08x:
b = DatetimeIndex(start=d1, end=d2, freq=_freq_to_pandas['B']).values
d = DatetimeIndex(start=d1, end=d2, freq=_freq_to_pandas['D']).values
w = DatetimeIndex(start=d1, end=d2, freq=_freq_to_pandas['W']).values
m = DatetimeIndex(start=d1, end=d2, freq=_freq_to_pandas['M']).values
a = DatetimeIndex(start=d1, end=d2, freq=_freq_to_pandas['A']).values
q = DatetimeIndex(start=d1, end=d2, freq=_freq_to_pandas['Q']).values
assert _infer_freq(w) == 'W-SUN'
assert _infer_freq(a) == 'A-DEC'
assert _infer_freq(q) == 'Q-DEC'
assert _infer_freq(w[:3]) == 'W-SUN'
assert _infer_freq(a[:3]) == 'A-DEC'
assert _infer_freq(q[:3]) == 'Q-DEC'
else:
from pandas import DateRange
b = DateRange(d1, d2, offset=_freq_to_pandas['B']).values
d = DateRange(d1, d2, offset=_freq_to_pandas['D']).values
w = DateRange(d1, d2, offset=_freq_to_pandas['W']).values
m = DateRange(d1, d2, offset=_freq_to_pandas['M']).values
a = DateRange(d1, d2, offset=_freq_to_pandas['A']).values
q = DateRange(d1, d2, offset=_freq_to_pandas['Q']).values
assert _infer_freq(w) == 'W'
assert _infer_freq(a) == 'A'
assert _infer_freq(q) == 'Q'
assert _infer_freq(w[:3]) == 'W'
assert _infer_freq(a[:3]) == 'A'
assert _infer_freq(q[:3]) == 'Q'
assert _infer_freq(b[2:5]) == 'B'
assert _infer_freq(b[:3]) == 'D'
assert _infer_freq(b) == 'B'
assert _infer_freq(d) == 'D'
assert _infer_freq(m) == 'M'
assert _infer_freq(d[:3]) == 'D'
assert _infer_freq(m[:3]) == 'M'
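# A small illustrative helper (not one of the test cases above): the date-string
# formats exercised by the tests, collected in one place. Only functions already
# imported at the top of this module are used.
def _example_date_strings():
    # Stata-style monthly and quarterly strings map to end-of-period datetimes.
    assert date_parser("1999m4") == datetime(1999, 4, 30)
    assert date_parser("1999:qIV") == datetime(1999, 12, 31)
    # Build a quarterly range of datetimes from a string range.
    return dates_from_range("1959q1", "1962q2")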
| apache-2.0 |
AdaptiveApplications/carnegie | tarc_bus_locator_client/numpy-1.8.1/numpy/lib/function_base.py | 9 | 113795 | from __future__ import division, absolute_import, print_function
__docformat__ = "restructuredtext en"
__all__ = [
'select', 'piecewise', 'trim_zeros', 'copy', 'iterable', 'percentile',
'diff', 'gradient', 'angle', 'unwrap', 'sort_complex', 'disp',
'extract', 'place', 'vectorize', 'asarray_chkfinite', 'average',
'histogram', 'histogramdd', 'bincount', 'digitize', 'cov', 'corrcoef',
'msort', 'median', 'sinc', 'hamming', 'hanning', 'bartlett',
'blackman', 'kaiser', 'trapz', 'i0', 'add_newdoc', 'add_docstring',
'meshgrid', 'delete', 'insert', 'append', 'interp', 'add_newdoc_ufunc']
import warnings
import types
import sys
import numpy.core.numeric as _nx
from numpy.core import linspace
from numpy.core.numeric import ones, zeros, arange, concatenate, array, \
asarray, asanyarray, empty, empty_like, ndarray, around
from numpy.core.numeric import ScalarType, dot, where, newaxis, intp, \
integer, isscalar
from numpy.core.umath import pi, multiply, add, arctan2, \
frompyfunc, isnan, cos, less_equal, sqrt, sin, mod, exp, log10
from numpy.core.fromnumeric import ravel, nonzero, choose, sort, partition, mean
from numpy.core.numerictypes import typecodes, number
from numpy.core import atleast_1d, atleast_2d
from numpy.lib.twodim_base import diag
from ._compiled_base import _insert, add_docstring
from ._compiled_base import digitize, bincount, interp as compiled_interp
from .utils import deprecate
from ._compiled_base import add_newdoc_ufunc
import numpy as np
import collections
from numpy.compat import long
# Force range to be a generator, for np.delete's usage.
if sys.version_info[0] < 3:
range = xrange
def iterable(y):
"""
Check whether or not an object can be iterated over.
Parameters
----------
y : object
Input object.
Returns
-------
b : {0, 1}
Return 1 if the object has an iterator method or is a sequence,
and 0 otherwise.
Examples
--------
>>> np.iterable([1, 2, 3])
1
>>> np.iterable(2)
0
"""
try: iter(y)
except: return 0
return 1
def histogram(a, bins=10, range=None, normed=False, weights=None, density=None):
"""
Compute the histogram of a set of data.
Parameters
----------
a : array_like
Input data. The histogram is computed over the flattened array.
bins : int or sequence of scalars, optional
If `bins` is an int, it defines the number of equal-width
bins in the given range (10, by default). If `bins` is a sequence,
it defines the bin edges, including the rightmost edge, allowing
for non-uniform bin widths.
range : (float, float), optional
The lower and upper range of the bins. If not provided, range
is simply ``(a.min(), a.max())``. Values outside the range are
ignored.
normed : bool, optional
This keyword is deprecated in Numpy 1.6 due to confusing/buggy
behavior. It will be removed in Numpy 2.0. Use the density keyword
instead.
If False, the result will contain the number of samples
in each bin. If True, the result is the value of the
probability *density* function at the bin, normalized such that
the *integral* over the range is 1. Note that this latter behavior is
known to be buggy with unequal bin widths; use `density` instead.
weights : array_like, optional
An array of weights, of the same shape as `a`. Each value in `a`
only contributes its associated weight towards the bin count
(instead of 1). If `normed` is True, the weights are normalized,
so that the integral of the density over the range remains 1
density : bool, optional
If False, the result will contain the number of samples
in each bin. If True, the result is the value of the
probability *density* function at the bin, normalized such that
the *integral* over the range is 1. Note that the sum of the
histogram values will not be equal to 1 unless bins of unity
width are chosen; it is not a probability *mass* function.
Overrides the `normed` keyword if given.
Returns
-------
hist : array
The values of the histogram. See `normed` and `weights` for a
description of the possible semantics.
bin_edges : array of dtype float
Return the bin edges ``(length(hist)+1)``.
See Also
--------
histogramdd, bincount, searchsorted, digitize
Notes
-----
All but the last (righthand-most) bin is half-open. In other words, if
`bins` is::
[1, 2, 3, 4]
then the first bin is ``[1, 2)`` (including 1, but excluding 2) and the
second ``[2, 3)``. The last bin, however, is ``[3, 4]``, which *includes*
4.
Examples
--------
>>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3])
(array([0, 2, 1]), array([0, 1, 2, 3]))
>>> np.histogram(np.arange(4), bins=np.arange(5), density=True)
(array([ 0.25, 0.25, 0.25, 0.25]), array([0, 1, 2, 3, 4]))
>>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3])
(array([1, 4, 1]), array([0, 1, 2, 3]))
>>> a = np.arange(5)
>>> hist, bin_edges = np.histogram(a, density=True)
>>> hist
array([ 0.5, 0. , 0.5, 0. , 0. , 0.5, 0. , 0.5, 0. , 0.5])
>>> hist.sum()
2.4999999999999996
>>> np.sum(hist*np.diff(bin_edges))
1.0
"""
a = asarray(a)
if weights is not None:
weights = asarray(weights)
if np.any(weights.shape != a.shape):
raise ValueError(
'weights should have the same shape as a.')
weights = weights.ravel()
a = a.ravel()
if (range is not None):
mn, mx = range
if (mn > mx):
raise AttributeError(
'max must be larger than min in range parameter.')
if not iterable(bins):
if np.isscalar(bins) and bins < 1:
raise ValueError("`bins` should be a positive integer.")
if range is None:
if a.size == 0:
# handle empty arrays. Can't determine range, so use 0-1.
range = (0, 1)
else:
range = (a.min(), a.max())
mn, mx = [mi+0.0 for mi in range]
if mn == mx:
mn -= 0.5
mx += 0.5
bins = linspace(mn, mx, bins+1, endpoint=True)
else:
bins = asarray(bins)
if (np.diff(bins) < 0).any():
raise AttributeError(
'bins must increase monotonically.')
# Histogram is an integer or a float array depending on the weights.
if weights is None:
ntype = int
else:
ntype = weights.dtype
n = np.zeros(bins.shape, ntype)
block = 65536
if weights is None:
for i in arange(0, len(a), block):
sa = sort(a[i:i+block])
n += np.r_[sa.searchsorted(bins[:-1], 'left'), \
sa.searchsorted(bins[-1], 'right')]
else:
zero = array(0, dtype=ntype)
for i in arange(0, len(a), block):
tmp_a = a[i:i+block]
tmp_w = weights[i:i+block]
sorting_index = np.argsort(tmp_a)
sa = tmp_a[sorting_index]
sw = tmp_w[sorting_index]
cw = np.concatenate(([zero,], sw.cumsum()))
bin_index = np.r_[sa.searchsorted(bins[:-1], 'left'), \
sa.searchsorted(bins[-1], 'right')]
n += cw[bin_index]
n = np.diff(n)
if density is not None:
if density:
db = array(np.diff(bins), float)
return n/db/n.sum(), bins
else:
return n, bins
else:
# deprecated, buggy behavior. Remove for Numpy 2.0
if normed:
db = array(np.diff(bins), float)
return n/(n*db).sum(), bins
else:
return n, bins
def histogramdd(sample, bins=10, range=None, normed=False, weights=None):
"""
Compute the multidimensional histogram of some data.
Parameters
----------
sample : array_like
The data to be histogrammed. It must be an (N,D) array or data
that can be converted to such. The rows of the resulting array
are the coordinates of points in a D dimensional polytope.
bins : sequence or int, optional
The bin specification:
* A sequence of arrays describing the bin edges along each dimension.
* The number of bins for each dimension (nx, ny, ... =bins)
* The number of bins for all dimensions (nx=ny=...=bins).
range : sequence, optional
A sequence of lower and upper bin edges to be used if the edges are
not given explicitly in `bins`. Defaults to the minimum and maximum
values along each dimension.
normed : bool, optional
If False, returns the number of samples in each bin. If True,
returns the bin density ``bin_count / sample_count / bin_volume``.
weights : array_like (N,), optional
An array of values `w_i` weighing each sample `(x_i, y_i, z_i, ...)`.
Weights are normalized to 1 if normed is True. If normed is False,
the values of the returned histogram are equal to the sum of the
weights belonging to the samples falling into each bin.
Returns
-------
H : ndarray
The multidimensional histogram of sample x. See normed and weights
for the different possible semantics.
edges : list
A list of D arrays describing the bin edges for each dimension.
See Also
--------
histogram: 1-D histogram
histogram2d: 2-D histogram
Examples
--------
>>> r = np.random.randn(100,3)
>>> H, edges = np.histogramdd(r, bins = (5, 8, 4))
>>> H.shape, edges[0].size, edges[1].size, edges[2].size
((5, 8, 4), 6, 9, 5)
"""
try:
# Sample is an ND-array.
N, D = sample.shape
except (AttributeError, ValueError):
# Sample is a sequence of 1D arrays.
sample = atleast_2d(sample).T
N, D = sample.shape
nbin = empty(D, int)
edges = D*[None]
dedges = D*[None]
if weights is not None:
weights = asarray(weights)
try:
M = len(bins)
if M != D:
raise AttributeError(
'The dimension of bins must be equal'\
' to the dimension of the sample x.')
except TypeError:
# bins is an integer
bins = D*[bins]
# Select range for each dimension
# Used only if number of bins is given.
if range is None:
# Handle empty input. Range can't be determined in that case, use 0-1.
if N == 0:
smin = zeros(D)
smax = ones(D)
else:
smin = atleast_1d(array(sample.min(0), float))
smax = atleast_1d(array(sample.max(0), float))
else:
smin = zeros(D)
smax = zeros(D)
for i in arange(D):
smin[i], smax[i] = range[i]
# Make sure the bins have a finite width.
for i in arange(len(smin)):
if smin[i] == smax[i]:
smin[i] = smin[i] - .5
smax[i] = smax[i] + .5
# Create edge arrays
for i in arange(D):
if isscalar(bins[i]):
if bins[i] < 1:
raise ValueError("Element at index %s in `bins` should be "
"a positive integer." % i)
nbin[i] = bins[i] + 2 # +2 for outlier bins
edges[i] = linspace(smin[i], smax[i], nbin[i]-1)
else:
edges[i] = asarray(bins[i], float)
nbin[i] = len(edges[i])+1 # +1 for outlier bins
dedges[i] = diff(edges[i])
if np.any(np.asarray(dedges[i]) <= 0):
raise ValueError("""
Found bin edge of size <= 0. Did you specify `bins` with
non-monotonic sequence?""")
nbin = asarray(nbin)
# Handle empty input.
if N == 0:
return np.zeros(nbin-2), edges
# Compute the bin number each sample falls into.
Ncount = {}
for i in arange(D):
Ncount[i] = digitize(sample[:, i], edges[i])
# Using digitize, values that fall on an edge are put in the right bin.
# For the rightmost bin, we want values equal to the right
# edge to be counted in the last bin, and not as an outlier.
for i in arange(D):
# Rounding precision
mindiff = dedges[i].min()
if not np.isinf(mindiff):
decimal = int(-log10(mindiff)) + 6
# Find which points are on the rightmost edge.
not_smaller_than_edge = (sample[:, i] >= edges[i][-1])
on_edge = (around(sample[:, i], decimal) == around(edges[i][-1], decimal))
# Shift these points one bin to the left.
Ncount[i][where(on_edge & not_smaller_than_edge)[0]] -= 1
# Flattened histogram matrix (1D)
# Reshape is used so that overlarge arrays
# will raise an error.
hist = zeros(nbin, float).reshape(-1)
# Compute the sample indices in the flattened histogram matrix.
ni = nbin.argsort()
xy = zeros(N, int)
for i in arange(0, D-1):
xy += Ncount[ni[i]] * nbin[ni[i+1:]].prod()
xy += Ncount[ni[-1]]
# Compute the number of repetitions in xy and assign it to the
# flattened histmat.
if len(xy) == 0:
return zeros(nbin-2, int), edges
flatcount = bincount(xy, weights)
a = arange(len(flatcount))
hist[a] = flatcount
# Shape into a proper matrix
hist = hist.reshape(sort(nbin))
for i in arange(nbin.size):
j = ni.argsort()[i]
hist = hist.swapaxes(i, j)
ni[i], ni[j] = ni[j], ni[i]
# Remove outliers (indices 0 and -1 for each dimension).
core = D*[slice(1, -1)]
hist = hist[core]
# Normalize if normed is True
if normed:
s = hist.sum()
for i in arange(D):
shape = ones(D, int)
shape[i] = nbin[i] - 2
hist = hist / dedges[i].reshape(shape)
hist /= s
if (hist.shape != nbin - 2).any():
raise RuntimeError(
"Internal Shape Error")
return hist, edges
def average(a, axis=None, weights=None, returned=False):
"""
Compute the weighted average along the specified axis.
Parameters
----------
a : array_like
Array containing data to be averaged. If `a` is not an array, a
conversion is attempted.
axis : int, optional
Axis along which to average `a`. If `None`, averaging is done over
the flattened array.
weights : array_like, optional
An array of weights associated with the values in `a`. Each value in
`a` contributes to the average according to its associated weight.
The weights array can either be 1-D (in which case its length must be
the size of `a` along the given axis) or of the same shape as `a`.
If `weights=None`, then all data in `a` are assumed to have a
weight equal to one.
returned : bool, optional
Default is `False`. If `True`, the tuple (`average`, `sum_of_weights`)
is returned, otherwise only the average is returned.
If `weights=None`, `sum_of_weights` is equivalent to the number of
elements over which the average is taken.
Returns
-------
average, [sum_of_weights] : {array_type, double}
Return the average along the specified axis. When returned is `True`,
return a tuple with the average as the first element and the sum
of the weights as the second element. The return type is `Float`
if `a` is of integer type, otherwise it is of the same type as `a`.
`sum_of_weights` is of the same type as `average`.
Raises
------
ZeroDivisionError
When all weights along axis are zero. See `numpy.ma.average` for a
version robust to this type of error.
TypeError
When the length of 1D `weights` is not the same as the shape of `a`
along axis.
See Also
--------
mean
ma.average : average for masked arrays -- useful if your data contains
"missing" values
Examples
--------
>>> data = range(1,5)
>>> data
[1, 2, 3, 4]
>>> np.average(data)
2.5
>>> np.average(range(1,11), weights=range(10,0,-1))
4.0
>>> data = np.arange(6).reshape((3,2))
>>> data
array([[0, 1],
[2, 3],
[4, 5]])
>>> np.average(data, axis=1, weights=[1./4, 3./4])
array([ 0.75, 2.75, 4.75])
>>> np.average(data, weights=[1./4, 3./4])
Traceback (most recent call last):
...
TypeError: Axis must be specified when shapes of a and weights differ.
"""
if not isinstance(a, np.matrix) :
a = np.asarray(a)
if weights is None :
avg = a.mean(axis)
scl = avg.dtype.type(a.size/avg.size)
else :
a = a + 0.0
wgt = np.array(weights, dtype=a.dtype, copy=0)
# Sanity checks
if a.shape != wgt.shape :
if axis is None :
raise TypeError(
"Axis must be specified when shapes of a "\
"and weights differ.")
if wgt.ndim != 1 :
raise TypeError(
"1D weights expected when shapes of a and "\
"weights differ.")
if wgt.shape[0] != a.shape[axis] :
raise ValueError(
"Length of weights not compatible with "\
"specified axis.")
# setup wgt to broadcast along axis
wgt = np.array(wgt, copy=0, ndmin=a.ndim).swapaxes(-1, axis)
scl = wgt.sum(axis=axis)
if (scl == 0.0).any():
raise ZeroDivisionError(
"Weights sum to zero, can't be normalized")
avg = np.multiply(a, wgt).sum(axis)/scl
if returned:
scl = np.multiply(avg, 0) + scl
return avg, scl
else:
return avg
def asarray_chkfinite(a, dtype=None, order=None):
"""
Convert the input to an array, checking for NaNs or Infs.
Parameters
----------
a : array_like
Input data, in any form that can be converted to an array. This
includes lists, lists of tuples, tuples, tuples of tuples, tuples
of lists and ndarrays. Success requires no NaNs or Infs.
dtype : data-type, optional
By default, the data-type is inferred from the input data.
order : {'C', 'F'}, optional
Whether to use row-major ('C') or column-major ('FORTRAN') memory
representation. Defaults to 'C'.
Returns
-------
out : ndarray
Array interpretation of `a`. No copy is performed if the input
is already an ndarray. If `a` is a subclass of ndarray, a base
class ndarray is returned.
Raises
------
ValueError
Raises ValueError if `a` contains NaN (Not a Number) or Inf (Infinity).
See Also
--------
asarray : Create an array.
asanyarray : Similar function which passes through subclasses.
ascontiguousarray : Convert input to a contiguous array.
asfarray : Convert input to a floating point ndarray.
asfortranarray : Convert input to an ndarray with column-major
memory order.
fromiter : Create an array from an iterator.
fromfunction : Construct an array by executing a function on grid
positions.
Examples
--------
Convert a list into an array. If all elements are finite
``asarray_chkfinite`` is identical to ``asarray``.
>>> a = [1, 2]
>>> np.asarray_chkfinite(a, dtype=float)
array([1., 2.])
Raises ValueError if array_like contains Nans or Infs.
>>> a = [1, 2, np.inf]
>>> try:
... np.asarray_chkfinite(a)
... except ValueError:
... print 'ValueError'
...
ValueError
"""
a = asarray(a, dtype=dtype, order=order)
if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
raise ValueError(
"array must not contain infs or NaNs")
return a
def piecewise(x, condlist, funclist, *args, **kw):
"""
Evaluate a piecewise-defined function.
Given a set of conditions and corresponding functions, evaluate each
function on the input data wherever its condition is true.
Parameters
----------
x : ndarray
The input domain.
condlist : list of bool arrays
Each boolean array corresponds to a function in `funclist`. Wherever
`condlist[i]` is True, `funclist[i](x)` is used as the output value.
Each boolean array in `condlist` selects a piece of `x`,
and should therefore be of the same shape as `x`.
The length of `condlist` must correspond to that of `funclist`.
If one extra function is given, i.e. if
``len(funclist) - len(condlist) == 1``, then that extra function
is the default value, used wherever all conditions are false.
funclist : list of callables, f(x,*args,**kw), or scalars
Each function is evaluated over `x` wherever its corresponding
condition is True. It should take an array as input and give an array
or a scalar value as output. If, instead of a callable,
a scalar is provided then a constant function (``lambda x: scalar``) is
assumed.
args : tuple, optional
Any further arguments given to `piecewise` are passed to the functions
upon execution, i.e., if called ``piecewise(..., ..., 1, 'a')``, then
each function is called as ``f(x, 1, 'a')``.
kw : dict, optional
Keyword arguments used in calling `piecewise` are passed to the
functions upon execution, i.e., if called
``piecewise(..., ..., lambda=1)``, then each function is called as
``f(x, lambda=1)``.
Returns
-------
out : ndarray
The output is the same shape and type as x and is found by
calling the functions in `funclist` on the appropriate portions of `x`,
as defined by the boolean arrays in `condlist`. Portions not covered
by any condition have undefined values.
See Also
--------
choose, select, where
Notes
-----
This is similar to choose or select, except that functions are
evaluated on elements of `x` that satisfy the corresponding condition from
`condlist`.
The result is::
|--
|funclist[0](x[condlist[0]])
out = |funclist[1](x[condlist[1]])
|...
|funclist[n2](x[condlist[n2]])
|--
Examples
--------
Define the sigma function, which is -1 for ``x < 0`` and +1 for ``x >= 0``.
>>> x = np.linspace(-2.5, 2.5, 6)
>>> np.piecewise(x, [x < 0, x >= 0], [-1, 1])
array([-1., -1., -1., 1., 1., 1.])
Define the absolute value, which is ``-x`` for ``x <0`` and ``x`` for
``x >= 0``.
>>> np.piecewise(x, [x < 0, x >= 0], [lambda x: -x, lambda x: x])
array([ 2.5, 1.5, 0.5, 0.5, 1.5, 2.5])
"""
x = asanyarray(x)
n2 = len(funclist)
if isscalar(condlist) or \
not (isinstance(condlist[0], list) or
isinstance(condlist[0], ndarray)):
condlist = [condlist]
condlist = [asarray(c, dtype=bool) for c in condlist]
n = len(condlist)
if n == n2-1: # compute the "otherwise" condition.
totlist = condlist[0]
for k in range(1, n):
totlist |= condlist[k]
condlist.append(~totlist)
n += 1
if (n != n2):
raise ValueError(
"function list and condition list must be the same")
zerod = False
# This is a hack to work around problems with NumPy's
# handling of 0-d arrays and boolean indexing with
# numpy.bool_ scalars
if x.ndim == 0:
x = x[None]
zerod = True
newcondlist = []
for k in range(n):
if condlist[k].ndim == 0:
condition = condlist[k][None]
else:
condition = condlist[k]
newcondlist.append(condition)
condlist = newcondlist
y = zeros(x.shape, x.dtype)
for k in range(n):
item = funclist[k]
if not isinstance(item, collections.Callable):
y[condlist[k]] = item
else:
vals = x[condlist[k]]
if vals.size > 0:
y[condlist[k]] = item(vals, *args, **kw)
if zerod:
y = y.squeeze()
return y
def select(condlist, choicelist, default=0):
"""
Return an array drawn from elements in choicelist, depending on conditions.
Parameters
----------
condlist : list of bool ndarrays
The list of conditions which determine from which array in `choicelist`
the output elements are taken. When multiple conditions are satisfied,
the first one encountered in `condlist` is used.
choicelist : list of ndarrays
The list of arrays from which the output elements are taken. It has
to be of the same length as `condlist`.
default : scalar, optional
The element inserted in `output` when all conditions evaluate to False.
Returns
-------
output : ndarray
The output at position m is the m-th element of the array in
`choicelist` where the m-th element of the corresponding array in
`condlist` is True.
See Also
--------
where : Return elements from one of two arrays depending on condition.
take, choose, compress, diag, diagonal
Examples
--------
>>> x = np.arange(10)
>>> condlist = [x<3, x>5]
>>> choicelist = [x, x**2]
>>> np.select(condlist, choicelist)
array([ 0, 1, 2, 0, 0, 0, 36, 49, 64, 81])
"""
n = len(condlist)
n2 = len(choicelist)
if n2 != n:
raise ValueError(
"list of cases must be same length as list of conditions")
choicelist = [default] + choicelist
S = 0
pfac = 1
for k in range(1, n+1):
S += k * pfac * asarray(condlist[k-1])
if k < n:
pfac *= (1-asarray(condlist[k-1]))
# handle special case of a 1-element condition but
# a multi-element choice
if type(S) in ScalarType or max(asarray(S).shape)==1:
pfac = asarray(1)
for k in range(n2+1):
pfac = pfac + asarray(choicelist[k])
if type(S) in ScalarType:
S = S*ones(asarray(pfac).shape, type(S))
else:
S = S*ones(asarray(pfac).shape, S.dtype)
return choose(S, tuple(choicelist))
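# Illustrative sketch added for exposition (not part of the original module):
# `select` picks, for each position, the first condition that is True, and
# `default` fills the positions where no condition holds. The helper name
# `_example_select_default` is hypothetical.
def _example_select_default():
    import numpy as np
    x = np.arange(6)
    out = np.select([x < 2, x > 3], [x, x ** 2], default=-1)
    # positions 2 and 3 satisfy neither condition, so they get the default
    assert list(out) == [0, 1, -1, -1, 16, 25]
    return out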
def copy(a, order='K'):
"""
Return an array copy of the given object.
Parameters
----------
a : array_like
Input data.
order : {'C', 'F', 'A', 'K'}, optional
Controls the memory layout of the copy. 'C' means C-order,
'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous,
'C' otherwise. 'K' means match the layout of `a` as closely
as possible. (Note that this function and :meth:`ndarray.copy` are very
similar, but have different default values for their ``order=``
arguments.)
Returns
-------
arr : ndarray
Array interpretation of `a`.
Notes
-----
This is equivalent to
>>> np.array(a, copy=True) #doctest: +SKIP
Examples
--------
Create an array x, with a reference y and a copy z:
>>> x = np.array([1, 2, 3])
>>> y = x
>>> z = np.copy(x)
Note that, when we modify x, y changes, but not z:
>>> x[0] = 10
>>> x[0] == y[0]
True
>>> x[0] == z[0]
False
"""
return array(a, order=order, copy=True)
# Basic operations
def gradient(f, *varargs):
"""
Return the gradient of an N-dimensional array.
The gradient is computed using central differences in the interior
and first differences at the boundaries. The returned gradient hence has
the same shape as the input array.
Parameters
----------
f : array_like
An N-dimensional array containing samples of a scalar function.
`*varargs` : scalars
0, 1, or N scalars specifying the sample distances in each direction,
that is: `dx`, `dy`, `dz`, ... The default distance is 1.
Returns
-------
gradient : ndarray
N arrays of the same shape as `f` giving the derivative of `f` with
respect to each dimension.
Examples
--------
>>> x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float)
>>> np.gradient(x)
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])
>>> np.gradient(x, 2)
array([ 0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ])
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float))
[array([[ 2., 2., -1.],
[ 2., 2., -1.]]),
array([[ 1. , 2.5, 4. ],
[ 1. , 1. , 1. ]])]
"""
f = np.asanyarray(f)
N = len(f.shape) # number of dimensions
n = len(varargs)
if n == 0:
dx = [1.0]*N
elif n == 1:
dx = [varargs[0]]*N
elif n == N:
dx = list(varargs)
else:
raise SyntaxError(
"invalid number of arguments")
# use central differences on interior and first differences on endpoints
outvals = []
# create slice objects --- initially all are [:, :, ..., :]
slice1 = [slice(None)]*N
slice2 = [slice(None)]*N
slice3 = [slice(None)]*N
otype = f.dtype.char
if otype not in ['f', 'd', 'F', 'D', 'm', 'M']:
otype = 'd'
# Difference of datetime64 elements results in timedelta64
if otype == 'M' :
# Need to use the full dtype name because it contains unit information
otype = f.dtype.name.replace('datetime', 'timedelta')
elif otype == 'm' :
# Needs to keep the specific units, can't be a general unit
otype = f.dtype
for axis in range(N):
# select out appropriate parts for this dimension
out = np.empty_like(f, dtype=otype)
slice1[axis] = slice(1, -1)
slice2[axis] = slice(2, None)
slice3[axis] = slice(None, -2)
# 1D equivalent -- out[1:-1] = (f[2:] - f[:-2])/2.0
out[slice1] = (f[slice2] - f[slice3])/2.0
slice1[axis] = 0
slice2[axis] = 1
slice3[axis] = 0
# 1D equivalent -- out[0] = (f[1] - f[0])
out[slice1] = (f[slice2] - f[slice3])
slice1[axis] = -1
slice2[axis] = -1
slice3[axis] = -2
# 1D equivalent -- out[-1] = (f[-1] - f[-2])
out[slice1] = (f[slice2] - f[slice3])
# divide by step size
outvals.append(out / dx[axis])
# reset the slice object in this dimension to ":"
slice1[axis] = slice(None)
slice2[axis] = slice(None)
slice3[axis] = slice(None)
if N == 1:
return outvals[0]
else:
return outvals
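# Illustrative sketch added for exposition (not part of the original module):
# passing a sample spacing to `gradient` simply divides the central and
# one-sided differences by that spacing. The helper name
# `_example_gradient_spacing` is hypothetical.
def _example_gradient_spacing():
    import numpy as np
    f = np.array([0., 1., 4., 9., 16.])    # f(x) = x**2 sampled at step 1
    g1 = np.gradient(f)                    # assumes dx = 1
    g2 = np.gradient(f, 0.5)               # same samples, dx = 0.5
    assert np.allclose(g2, 2 * g1)         # halving dx doubles the estimate
    return g1, g2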
def diff(a, n=1, axis=-1):
"""
Calculate the n-th order discrete difference along given axis.
The first order difference is given by ``out[n] = a[n+1] - a[n]`` along
the given axis, higher order differences are calculated by using `diff`
recursively.
Parameters
----------
a : array_like
Input array
n : int, optional
The number of times values are differenced.
axis : int, optional
The axis along which the difference is taken, default is the last axis.
Returns
-------
diff : ndarray
The `n` order differences. The shape of the output is the same as `a`
except along `axis` where the dimension is smaller by `n`.
See Also
--------
gradient, ediff1d, cumsum
Examples
--------
>>> x = np.array([1, 2, 4, 7, 0])
>>> np.diff(x)
array([ 1, 2, 3, -7])
>>> np.diff(x, n=2)
array([ 1, 1, -10])
>>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]])
>>> np.diff(x)
array([[2, 3, 4],
[5, 1, 2]])
>>> np.diff(x, axis=0)
array([[-1, 2, 0, -2]])
"""
if n == 0:
return a
if n < 0:
raise ValueError(
"order must be non-negative but got " + repr(n))
a = asanyarray(a)
nd = len(a.shape)
slice1 = [slice(None)]*nd
slice2 = [slice(None)]*nd
slice1[axis] = slice(1, None)
slice2[axis] = slice(None, -1)
slice1 = tuple(slice1)
slice2 = tuple(slice2)
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
return a[slice1]-a[slice2]
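# Illustrative sketch added for exposition (not part of the original module):
# a first-order `diff` is undone by `cumsum` once the first element is
# prepended. The helper name `_example_diff_cumsum` is hypothetical.
def _example_diff_cumsum():
    import numpy as np
    a = np.array([3, 1, 4, 1, 5, 9])
    d = np.diff(a)
    restored = np.concatenate(([a[0]], a[0] + np.cumsum(d)))
    assert np.array_equal(restored, a)
    return d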
def interp(x, xp, fp, left=None, right=None):
"""
One-dimensional linear interpolation.
Returns the one-dimensional piecewise linear interpolant to a function
with given values at discrete data-points.
Parameters
----------
x : array_like
The x-coordinates of the interpolated values.
xp : 1-D sequence of floats
The x-coordinates of the data points, must be increasing.
fp : 1-D sequence of floats
The y-coordinates of the data points, same length as `xp`.
left : float, optional
Value to return for `x < xp[0]`, default is `fp[0]`.
right : float, optional
Value to return for `x > xp[-1]`, default is `fp[-1]`.
Returns
-------
y : {float, ndarray}
The interpolated values, same shape as `x`.
Raises
------
ValueError
If `xp` and `fp` have different lengths
Notes
-----
Does not check that the x-coordinate sequence `xp` is increasing.
If `xp` is not increasing, the results are nonsense.
A simple check for increasing is::
np.all(np.diff(xp) > 0)
Examples
--------
>>> xp = [1, 2, 3]
>>> fp = [3, 2, 0]
>>> np.interp(2.5, xp, fp)
1.0
>>> np.interp([0, 1, 1.5, 2.72, 3.14], xp, fp)
array([ 3. , 3. , 2.5 , 0.56, 0. ])
>>> UNDEF = -99.0
>>> np.interp(3.14, xp, fp, right=UNDEF)
-99.0
Plot an interpolant to the sine function:
>>> x = np.linspace(0, 2*np.pi, 10)
>>> y = np.sin(x)
>>> xvals = np.linspace(0, 2*np.pi, 50)
>>> yinterp = np.interp(xvals, x, y)
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, y, 'o')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.plot(xvals, yinterp, '-x')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.show()
"""
if isinstance(x, (float, int, number)):
return compiled_interp([x], xp, fp, left, right).item()
elif isinstance(x, np.ndarray) and x.ndim == 0:
return compiled_interp([x], xp, fp, left, right).item()
else:
return compiled_interp(x, xp, fp, left, right)
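# Illustrative sketch added for exposition (not part of the original module):
# `interp` clamps to `fp[0]`/`fp[-1]` outside the data range unless `left` or
# `right` override it, and it silently assumes `xp` is increasing -- the check
# below is the one suggested in the Notes. The helper name
# `_example_interp_edges` is hypothetical.
def _example_interp_edges():
    import numpy as np
    xp = np.array([0.0, 1.0, 2.0])
    fp = np.array([10.0, 20.0, 30.0])
    assert np.all(np.diff(xp) > 0)                      # xp must be increasing
    assert np.interp(-1.0, xp, fp) == 10.0              # clamped to fp[0]
    assert np.interp(3.0, xp, fp, right=-99.0) == -99.0
    assert np.interp(0.5, xp, fp) == 15.0               # plain linear interp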
def angle(z, deg=0):
"""
Return the angle of the complex argument.
Parameters
----------
z : array_like
A complex number or sequence of complex numbers.
deg : bool, optional
Return angle in degrees if True, radians if False (default).
Returns
-------
angle : {ndarray, scalar}
The counterclockwise angle from the positive real axis on
the complex plane, with dtype as numpy.float64.
See Also
--------
arctan2
absolute
Examples
--------
>>> np.angle([1.0, 1.0j, 1+1j]) # in radians
array([ 0. , 1.57079633, 0.78539816])
>>> np.angle(1+1j, deg=True) # in degrees
45.0
"""
if deg:
fact = 180/pi
else:
fact = 1.0
z = asarray(z)
if (issubclass(z.dtype.type, _nx.complexfloating)):
zimag = z.imag
zreal = z.real
else:
zimag = 0
zreal = z
return arctan2(zimag, zreal) * fact
def unwrap(p, discont=pi, axis=-1):
"""
Unwrap by changing deltas between values to 2*pi complement.
Unwrap radian phase `p` by changing absolute jumps greater than
`discont` to their 2*pi complement along the given axis.
Parameters
----------
p : array_like
Input array.
discont : float, optional
Maximum discontinuity between values, default is ``pi``.
axis : int, optional
Axis along which unwrap will operate, default is the last axis.
Returns
-------
out : ndarray
Output array.
See Also
--------
rad2deg, deg2rad
Notes
-----
If the discontinuity in `p` is smaller than ``pi``, but larger than
`discont`, no unwrapping is done because taking the 2*pi complement
would only make the discontinuity larger.
Examples
--------
>>> phase = np.linspace(0, np.pi, num=5)
>>> phase[3:] += np.pi
>>> phase
array([ 0. , 0.78539816, 1.57079633, 5.49778714, 6.28318531])
>>> np.unwrap(phase)
array([ 0. , 0.78539816, 1.57079633, -0.78539816, 0. ])
"""
p = asarray(p)
nd = len(p.shape)
dd = diff(p, axis=axis)
slice1 = [slice(None, None)]*nd # full slices
slice1[axis] = slice(1, None)
ddmod = mod(dd+pi, 2*pi)-pi
_nx.copyto(ddmod, pi, where=(ddmod==-pi) & (dd > 0))
ph_correct = ddmod - dd
_nx.copyto(ph_correct, 0, where=abs(dd)<discont)
up = array(p, copy=True, dtype='d')
up[slice1] = p[slice1] + ph_correct.cumsum(axis)
return up
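# Illustrative sketch added for exposition (not part of the original module):
# `unwrap` removes the 2*pi jumps that wrapping a phase into (-pi, pi]
# introduces, recovering continuous first differences. The helper name
# `_example_unwrap_phase` is hypothetical.
def _example_unwrap_phase():
    import numpy as np
    true_phase = np.linspace(0, 4 * np.pi, 9)                 # steadily growing
    wrapped = np.mod(true_phase + np.pi, 2 * np.pi) - np.pi   # folded phase
    recovered = np.unwrap(wrapped)
    # unwrap restores continuity up to a constant multiple of 2*pi,
    # so the first differences match those of the true phase
    assert np.allclose(np.diff(recovered), np.diff(true_phase))
    return recovered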
def sort_complex(a):
"""
Sort a complex array using the real part first, then the imaginary part.
Parameters
----------
a : array_like
Input array
Returns
-------
out : complex ndarray
Always returns a sorted complex array.
Examples
--------
>>> np.sort_complex([5, 3, 6, 2, 1])
array([ 1.+0.j, 2.+0.j, 3.+0.j, 5.+0.j, 6.+0.j])
>>> np.sort_complex([1 + 2j, 2 - 1j, 3 - 2j, 3 - 3j, 3 + 5j])
array([ 1.+2.j, 2.-1.j, 3.-3.j, 3.-2.j, 3.+5.j])
"""
b = array(a, copy=True)
b.sort()
if not issubclass(b.dtype.type, _nx.complexfloating):
if b.dtype.char in 'bhBH':
return b.astype('F')
elif b.dtype.char == 'g':
return b.astype('G')
else:
return b.astype('D')
else:
return b
def trim_zeros(filt, trim='fb'):
"""
Trim the leading and/or trailing zeros from a 1-D array or sequence.
Parameters
----------
filt : 1-D array or sequence
Input array.
trim : str, optional
A string with 'f' representing trim from front and 'b' to trim from
back. Default is 'fb', trim zeros from both front and back of the
array.
Returns
-------
trimmed : 1-D array or sequence
The result of trimming the input. The input data type is preserved.
Examples
--------
>>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0))
>>> np.trim_zeros(a)
array([1, 2, 3, 0, 2, 1])
>>> np.trim_zeros(a, 'b')
array([0, 0, 0, 1, 2, 3, 0, 2, 1])
The input data type is preserved, list/tuple in means list/tuple out.
>>> np.trim_zeros([0, 1, 2, 0])
[1, 2]
"""
first = 0
trim = trim.upper()
if 'F' in trim:
for i in filt:
if i != 0.: break
else: first = first + 1
last = len(filt)
if 'B' in trim:
for i in filt[::-1]:
if i != 0.: break
else: last = last - 1
return filt[first:last]
import sys
if sys.hexversion < 0x2040000:
from sets import Set as set
@deprecate
def unique(x):
"""
This function is deprecated. Use numpy.lib.arraysetops.unique()
instead.
"""
try:
tmp = x.flatten()
if tmp.size == 0:
return tmp
tmp.sort()
idx = concatenate(([True], tmp[1:]!=tmp[:-1]))
return tmp[idx]
except AttributeError:
items = sorted(set(x))
return asarray(items)
def extract(condition, arr):
"""
Return the elements of an array that satisfy some condition.
This is equivalent to ``np.compress(ravel(condition), ravel(arr))``. If
`condition` is boolean ``np.extract`` is equivalent to ``arr[condition]``.
Parameters
----------
condition : array_like
An array whose nonzero or True entries indicate the elements of `arr`
to extract.
arr : array_like
Input array of the same size as `condition`.
Returns
-------
extract : ndarray
Rank 1 array of values from `arr` where `condition` is True.
See Also
--------
take, put, copyto, compress
Examples
--------
>>> arr = np.arange(12).reshape((3, 4))
>>> arr
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> condition = np.mod(arr, 3)==0
>>> condition
array([[ True, False, False, True],
[False, False, True, False],
[False, True, False, False]], dtype=bool)
>>> np.extract(condition, arr)
array([0, 3, 6, 9])
If `condition` is boolean:
>>> arr[condition]
array([0, 3, 6, 9])
"""
return _nx.take(ravel(arr), nonzero(ravel(condition))[0])
def place(arr, mask, vals):
"""
Change elements of an array based on conditional and input values.
Similar to ``np.copyto(arr, vals, where=mask)``, the difference is that
`place` uses the first N elements of `vals`, where N is the number of
True values in `mask`, while `copyto` uses the elements where `mask`
is True.
Note that `extract` does the exact opposite of `place`.
Parameters
----------
arr : array_like
Array to put data into.
mask : array_like
Boolean mask array. Must have the same size as `arr`.
vals : 1-D sequence
Values to put into `arr`. Only the first N elements are used, where
N is the number of True values in `mask`. If `vals` is smaller
than N, it will be repeated.
See Also
--------
copyto, put, take, extract
Examples
--------
>>> arr = np.arange(6).reshape(2, 3)
>>> np.place(arr, arr>2, [44, 55])
>>> arr
array([[ 0, 1, 2],
[44, 55, 44]])
"""
return _insert(arr, mask, vals)
def disp(mesg, device=None, linefeed=True):
"""
Display a message on a device.
Parameters
----------
mesg : str
Message to display.
device : object
Device to write message. If None, defaults to ``sys.stdout`` which is
very similar to ``print``. `device` needs to have ``write()`` and
``flush()`` methods.
linefeed : bool, optional
Option whether to print a line feed or not. Defaults to True.
Raises
------
AttributeError
If `device` does not have a ``write()`` or ``flush()`` method.
Examples
--------
Besides ``sys.stdout``, a file-like object can also be used as it has
both required methods:
>>> from StringIO import StringIO
>>> buf = StringIO()
>>> np.disp('"Display" in a file', device=buf)
>>> buf.getvalue()
'"Display" in a file\\n'
"""
if device is None:
import sys
device = sys.stdout
if linefeed:
device.write('%s\n' % mesg)
else:
device.write('%s' % mesg)
device.flush()
return
class vectorize(object):
"""
vectorize(pyfunc, otypes='', doc=None, excluded=None, cache=False)
Generalized function class.
Define a vectorized function which takes a nested sequence
of objects or numpy arrays as inputs and returns a
numpy array as output. The vectorized function evaluates `pyfunc` over
successive tuples of the input arrays like the python map function,
except it uses the broadcasting rules of numpy.
The data type of the output of `vectorized` is determined by calling
the function with the first element of the input. This can be avoided
by specifying the `otypes` argument.
Parameters
----------
pyfunc : callable
A python function or method.
otypes : str or list of dtypes, optional
The output data type. It must be specified as either a string of
typecode characters or a list of data type specifiers. There should
be one data type specifier for each output.
doc : str, optional
The docstring for the function. If `None`, the docstring will be the
``pyfunc.__doc__``.
excluded : set, optional
Set of strings or integers representing the positional or keyword
arguments for which the function will not be vectorized. These will be
passed directly to `pyfunc` unmodified.
.. versionadded:: 1.7.0
cache : bool, optional
If `True`, then cache the first function call that determines the number
of outputs if `otypes` is not provided.
.. versionadded:: 1.7.0
Returns
-------
vectorized : callable
Vectorized function.
Examples
--------
>>> def myfunc(a, b):
... "Return a-b if a>b, otherwise return a+b"
... if a > b:
... return a - b
... else:
... return a + b
>>> vfunc = np.vectorize(myfunc)
>>> vfunc([1, 2, 3, 4], 2)
array([3, 4, 1, 2])
The docstring is taken from the input function to `vectorize` unless it
is specified
>>> vfunc.__doc__
'Return a-b if a>b, otherwise return a+b'
>>> vfunc = np.vectorize(myfunc, doc='Vectorized `myfunc`')
>>> vfunc.__doc__
'Vectorized `myfunc`'
The output type is determined by evaluating the first element of the input,
unless it is specified
>>> out = vfunc([1, 2, 3, 4], 2)
>>> type(out[0])
<type 'numpy.int32'>
>>> vfunc = np.vectorize(myfunc, otypes=[np.float])
>>> out = vfunc([1, 2, 3, 4], 2)
>>> type(out[0])
<type 'numpy.float64'>
The `excluded` argument can be used to prevent vectorizing over certain
arguments. This can be useful for array-like arguments of a fixed length
such as the coefficients for a polynomial as in `polyval`:
>>> def mypolyval(p, x):
... _p = list(p)
... res = _p.pop(0)
... while _p:
... res = res*x + _p.pop(0)
... return res
>>> vpolyval = np.vectorize(mypolyval, excluded=['p'])
>>> vpolyval(p=[1, 2, 3], x=[0, 1])
array([3, 6])
Positional arguments may also be excluded by specifying their position:
>>> vpolyval.excluded.add(0)
>>> vpolyval([1, 2, 3], x=[0, 1])
array([3, 6])
Notes
-----
The `vectorize` function is provided primarily for convenience, not for
performance. The implementation is essentially a for loop.
If `otypes` is not specified, then a call to the function with the
first argument will be used to determine the number of outputs. The
results of this call will be cached if `cache` is `True` to prevent
calling the function twice. However, to implement the cache, the
original function must be wrapped which will slow down subsequent
calls, so only do this if your function is expensive.
Support for the new keyword-argument interface and the `excluded`
argument further degrades performance.
"""
def __init__(self, pyfunc, otypes='', doc=None, excluded=None, cache=False):
self.pyfunc = pyfunc
self.cache = cache
if doc is None:
self.__doc__ = pyfunc.__doc__
else:
self.__doc__ = doc
if isinstance(otypes, str):
self.otypes = otypes
for char in self.otypes:
if char not in typecodes['All']:
raise ValueError("Invalid otype specified: %s" % (char,))
elif iterable(otypes):
self.otypes = ''.join([_nx.dtype(x).char for x in otypes])
else:
raise ValueError("Invalid otype specification")
# Excluded variable support
if excluded is None:
excluded = set()
self.excluded = set(excluded)
if self.otypes and not self.excluded:
self._ufunc = None # Caching to improve default performance
def __call__(self, *args, **kwargs):
"""
Return arrays with the results of `pyfunc` broadcast (vectorized) over
`args` and `kwargs` not in `excluded`.
"""
excluded = self.excluded
if not kwargs and not excluded:
func = self.pyfunc
vargs = args
else:
# The wrapper accepts only positional arguments: we use `names` and
# `inds` to mutate `the_args` and `kwargs` to pass to the original
# function.
nargs = len(args)
names = [_n for _n in kwargs if _n not in excluded]
inds = [_i for _i in range(nargs) if _i not in excluded]
the_args = list(args)
def func(*vargs):
for _n, _i in enumerate(inds):
the_args[_i] = vargs[_n]
kwargs.update(zip(names, vargs[len(inds):]))
return self.pyfunc(*the_args, **kwargs)
vargs = [args[_i] for _i in inds]
vargs.extend([kwargs[_n] for _n in names])
return self._vectorize_call(func=func, args=vargs)
def _get_ufunc_and_otypes(self, func, args):
"""Return (ufunc, otypes)."""
# frompyfunc will fail if args is empty
assert args
if self.otypes:
otypes = self.otypes
nout = len(otypes)
# Note logic here: We only *use* self._ufunc if func is self.pyfunc
# even though we set self._ufunc regardless.
if func is self.pyfunc and self._ufunc is not None:
ufunc = self._ufunc
else:
ufunc = self._ufunc = frompyfunc(func, len(args), nout)
else:
# Get number of outputs and output types by calling the function on
# the first entries of args. We also cache the result to prevent
# the subsequent call when the ufunc is evaluated.
# Assumes that ufunc first evaluates the 0th elements in the input
# arrays (the input values are not checked to ensure this)
inputs = [asarray(_a).flat[0] for _a in args]
outputs = func(*inputs)
# Performance note: profiling indicates that -- for simple functions
# at least -- this wrapping can almost double the execution time.
# Hence we make it optional.
if self.cache:
_cache = [outputs]
def _func(*vargs):
if _cache:
return _cache.pop()
else:
return func(*vargs)
else:
_func = func
if isinstance(outputs, tuple):
nout = len(outputs)
else:
nout = 1
outputs = (outputs,)
otypes = ''.join([asarray(outputs[_k]).dtype.char
for _k in range(nout)])
# Performance note: profiling indicates that creating the ufunc is
# not a significant cost compared with wrapping so it seems not
# worth trying to cache this.
ufunc = frompyfunc(_func, len(args), nout)
return ufunc, otypes
def _vectorize_call(self, func, args):
"""Vectorized call to `func` over positional `args`."""
if not args:
_res = func()
else:
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
# Convert args to object arrays first
inputs = [array(_a, copy=False, subok=True, dtype=object)
for _a in args]
outputs = ufunc(*inputs)
if ufunc.nout == 1:
_res = array(outputs,
copy=False, subok=True, dtype=otypes[0])
else:
_res = tuple([array(_x, copy=False, subok=True, dtype=_t)
for _x, _t in zip(outputs, otypes)])
return _res
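# Illustrative sketch added for exposition (not part of the original module):
# `otypes` pins the output dtype up front (avoiding the trial call on the
# first element) and `excluded` keeps an argument out of the broadcasting.
# The helper name `_example_vectorize_usage` is hypothetical.
def _example_vectorize_usage():
    import numpy as np

    def clip_scale(value, scale):
        return min(value, 10) * scale

    vfunc = np.vectorize(clip_scale, otypes=[float], excluded=['scale'])
    out = vfunc(value=[2, 5, 50], scale=0.5)   # 'scale' is passed through as-is
    assert out.dtype == np.float64
    assert np.allclose(out, [1.0, 2.5, 5.0])
    return out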
def cov(m, y=None, rowvar=1, bias=0, ddof=None):
"""
Estimate a covariance matrix, given data.
Covariance indicates the level to which two variables vary together.
If we examine N-dimensional samples, :math:`X = [x_1, x_2, ... x_N]^T`,
then the covariance matrix element :math:`C_{ij}` is the covariance of
:math:`x_i` and :math:`x_j`. The element :math:`C_{ii}` is the variance
of :math:`x_i`.
Parameters
----------
m : array_like
A 1-D or 2-D array containing multiple variables and observations.
Each row of `m` represents a variable, and each column a single
observation of all those variables. Also see `rowvar` below.
y : array_like, optional
An additional set of variables and observations. `y` has the same
form as that of `m`.
rowvar : int, optional
If `rowvar` is non-zero (default), then each row represents a
variable, with observations in the columns. Otherwise, the relationship
is transposed: each column represents a variable, while the rows
contain observations.
bias : int, optional
Default normalization is by ``(N - 1)``, where ``N`` is the number of
observations given (unbiased estimate). If `bias` is 1, then
normalization is by ``N``. These values can be overridden by using
the keyword ``ddof`` in numpy versions >= 1.5.
ddof : int, optional
.. versionadded:: 1.5
If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is
the number of observations; this overrides the value implied by
``bias``. The default value is ``None``.
Returns
-------
out : ndarray
The covariance matrix of the variables.
See Also
--------
corrcoef : Normalized covariance matrix
Examples
--------
Consider two variables, :math:`x_0` and :math:`x_1`, which
correlate perfectly, but in opposite directions:
>>> x = np.array([[0, 2], [1, 1], [2, 0]]).T
>>> x
array([[0, 1, 2],
[2, 1, 0]])
Note how :math:`x_0` increases while :math:`x_1` decreases. The covariance
matrix shows this clearly:
>>> np.cov(x)
array([[ 1., -1.],
[-1., 1.]])
Note that element :math:`C_{0,1}`, which shows the correlation between
:math:`x_0` and :math:`x_1`, is negative.
Further, note how `x` and `y` are combined:
>>> x = [-2.1, -1, 4.3]
>>> y = [3, 1.1, 0.12]
>>> X = np.vstack((x,y))
>>> print np.cov(X)
[[ 11.71 -4.286 ]
[ -4.286 2.14413333]]
>>> print np.cov(x, y)
[[ 11.71 -4.286 ]
[ -4.286 2.14413333]]
>>> print np.cov(x)
11.71
"""
# Check inputs
if ddof is not None and ddof != int(ddof):
raise ValueError("ddof must be integer")
X = array(m, ndmin=2, dtype=float)
if X.size == 0:
# handle empty arrays
return np.array(m)
if X.shape[0] == 1:
rowvar = 1
if rowvar:
axis = 0
tup = (slice(None), newaxis)
else:
axis = 1
tup = (newaxis, slice(None))
if y is not None:
y = array(y, copy=False, ndmin=2, dtype=float)
X = concatenate((X, y), axis)
X -= X.mean(axis=1-axis)[tup]
if rowvar:
N = X.shape[1]
else:
N = X.shape[0]
if ddof is None:
if bias == 0:
ddof = 1
else:
ddof = 0
fact = float(N - ddof)
if not rowvar:
return (dot(X.T, X.conj()) / fact).squeeze()
else:
return (dot(X, X.T.conj()) / fact).squeeze()
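# Illustrative sketch added for exposition (not part of the original module):
# the relationship between `bias`, `ddof` and the N - ddof normalization used
# above. The helper name `_example_cov_normalisation` is hypothetical.
def _example_cov_normalisation():
    import numpy as np
    x = np.array([[0.0, 1.0, 2.0],
                  [2.0, 1.0, 0.0]])
    unbiased = np.cov(x)             # divides by N - 1 (default)
    biased = np.cov(x, bias=1)       # divides by N
    n = x.shape[1]
    assert np.allclose(biased * n / (n - 1), unbiased)
    return unbiased, biased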
def corrcoef(x, y=None, rowvar=1, bias=0, ddof=None):
"""
Return correlation coefficients.
Please refer to the documentation for `cov` for more detail. The
relationship between the correlation coefficient matrix, `P`, and the
covariance matrix, `C`, is
.. math:: P_{ij} = \\frac{ C_{ij} } { \\sqrt{ C_{ii} * C_{jj} } }
The values of `P` are between -1 and 1, inclusive.
Parameters
----------
x : array_like
A 1-D or 2-D array containing multiple variables and observations.
Each row of `x` represents a variable, and each column a single
observation of all those variables. Also see `rowvar` below.
y : array_like, optional
An additional set of variables and observations. `y` has the same
shape as `x`.
rowvar : int, optional
If `rowvar` is non-zero (default), then each row represents a
variable, with observations in the columns. Otherwise, the relationship
is transposed: each column represents a variable, while the rows
contain observations.
bias : int, optional
Default normalization is by ``(N - 1)``, where ``N`` is the number of
observations (unbiased estimate). If `bias` is 1, then
normalization is by ``N``. These values can be overridden by using
the keyword ``ddof`` in numpy versions >= 1.5.
ddof : {None, int}, optional
.. versionadded:: 1.5
If not ``None`` normalization is by ``(N - ddof)``, where ``N`` is
the number of observations; this overrides the value implied by
``bias``. The default value is ``None``.
Returns
-------
out : ndarray
The correlation coefficient matrix of the variables.
See Also
--------
cov : Covariance matrix
"""
c = cov(x, y, rowvar, bias, ddof)
if c.size == 0:
# handle empty arrays
return c
try:
d = diag(c)
except ValueError: # scalar covariance
return 1
return c/sqrt(multiply.outer(d, d))
def blackman(M):
"""
Return the Blackman window.
The Blackman window is a taper formed by using the first three
terms of a summation of cosines. It was designed to have close to the
minimal leakage possible. It is close to optimal, only slightly worse
than a Kaiser window.
Parameters
----------
M : int
Number of points in the output window. If zero or less, an empty
array is returned.
Returns
-------
out : ndarray
The window, with the maximum value normalized to one (the value one
appears only if the number of samples is odd).
See Also
--------
bartlett, hamming, hanning, kaiser
Notes
-----
The Blackman window is defined as
.. math:: w(n) = 0.42 - 0.5 \\cos(2\\pi n/M) + 0.08 \\cos(4\\pi n/M)
Most references to the Blackman window come from the signal processing
literature, where it is used as one of many windowing functions for
smoothing values. It is also known as an apodization (which means
"removing the foot", i.e. smoothing discontinuities at the beginning
and end of the sampled signal) or tapering function. It is known as a
"near optimal" tapering function, almost as good (by some measures)
as the kaiser window.
References
----------
Blackman, R.B. and Tukey, J.W., (1958) The measurement of power spectra,
Dover Publications, New York.
Oppenheim, A.V., and R.W. Schafer. Discrete-Time Signal Processing.
Upper Saddle River, NJ: Prentice-Hall, 1999, pp. 468-471.
Examples
--------
>>> np.blackman(12)
array([ -1.38777878e-17, 3.26064346e-02, 1.59903635e-01,
4.14397981e-01, 7.36045180e-01, 9.67046769e-01,
9.67046769e-01, 7.36045180e-01, 4.14397981e-01,
1.59903635e-01, 3.26064346e-02, -1.38777878e-17])
Plot the window and the frequency response:
>>> from numpy.fft import fft, fftshift
>>> window = np.blackman(51)
>>> plt.plot(window)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Blackman window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Amplitude")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Sample")
<matplotlib.text.Text object at 0x...>
>>> plt.show()
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> A = fft(window, 2048) / 25.5
>>> mag = np.abs(fftshift(A))
>>> freq = np.linspace(-0.5, 0.5, len(A))
>>> response = 20 * np.log10(mag)
>>> response = np.clip(response, -100, 100)
>>> plt.plot(freq, response)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Frequency response of Blackman window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Magnitude [dB]")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Normalized frequency [cycles per sample]")
<matplotlib.text.Text object at 0x...>
>>> plt.axis('tight')
(-0.5, 0.5, -100.0, ...)
>>> plt.show()
"""
if M < 1:
return array([])
if M == 1:
return ones(1, float)
n = arange(0, M)
return 0.42-0.5*cos(2.0*pi*n/(M-1)) + 0.08*cos(4.0*pi*n/(M-1))
def bartlett(M):
"""
Return the Bartlett window.
The Bartlett window is very similar to a triangular window, except
that the end points are at zero. It is often used in signal
processing for tapering a signal, without generating too much
ripple in the frequency domain.
Parameters
----------
M : int
Number of points in the output window. If zero or less, an
empty array is returned.
Returns
-------
out : array
The triangular window, with the maximum value normalized to one
(the value one appears only if the number of samples is odd), with
the first and last samples equal to zero.
See Also
--------
blackman, hamming, hanning, kaiser
Notes
-----
The Bartlett window is defined as
.. math:: w(n) = \\frac{2}{M-1} \\left(
\\frac{M-1}{2} - \\left|n - \\frac{M-1}{2}\\right|
\\right)
Most references to the Bartlett window come from the signal
processing literature, where it is used as one of many windowing
functions for smoothing values. Note that convolution with this
window produces linear interpolation. It is also known as an
apodization (which means "removing the foot", i.e. smoothing
discontinuities at the beginning and end of the sampled signal) or
tapering function. The Fourier transform of the Bartlett window is the
product of two sinc functions.
Note the excellent discussion in Kanasewich.
References
----------
.. [1] M.S. Bartlett, "Periodogram Analysis and Continuous Spectra",
Biometrika 37, 1-16, 1950.
.. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics",
The University of Alberta Press, 1975, pp. 109-110.
.. [3] A.V. Oppenheim and R.W. Schafer, "Discrete-Time Signal
Processing", Prentice-Hall, 1999, pp. 468-471.
.. [4] Wikipedia, "Window function",
http://en.wikipedia.org/wiki/Window_function
.. [5] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling,
"Numerical Recipes", Cambridge University Press, 1986, page 429.
Examples
--------
>>> np.bartlett(12)
array([ 0. , 0.18181818, 0.36363636, 0.54545455, 0.72727273,
0.90909091, 0.90909091, 0.72727273, 0.54545455, 0.36363636,
0.18181818, 0. ])
Plot the window and its frequency response (requires SciPy and matplotlib):
>>> from numpy.fft import fft, fftshift
>>> window = np.bartlett(51)
>>> plt.plot(window)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Bartlett window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Amplitude")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Sample")
<matplotlib.text.Text object at 0x...>
>>> plt.show()
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> A = fft(window, 2048) / 25.5
>>> mag = np.abs(fftshift(A))
>>> freq = np.linspace(-0.5, 0.5, len(A))
>>> response = 20 * np.log10(mag)
>>> response = np.clip(response, -100, 100)
>>> plt.plot(freq, response)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Frequency response of Bartlett window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Magnitude [dB]")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Normalized frequency [cycles per sample]")
<matplotlib.text.Text object at 0x...>
>>> plt.axis('tight')
(-0.5, 0.5, -100.0, ...)
>>> plt.show()
"""
if M < 1:
return array([])
if M == 1:
return ones(1, float)
n = arange(0, M)
return where(less_equal(n, (M-1)/2.0), 2.0*n/(M-1), 2.0-2.0*n/(M-1))
def hanning(M):
"""
Return the Hanning window.
The Hanning window is a taper formed by using a weighted cosine.
Parameters
----------
M : int
Number of points in the output window. If zero or less, an
empty array is returned.
Returns
-------
out : ndarray, shape(M,)
The window, with the maximum value normalized to one (the value
one appears only if `M` is odd).
See Also
--------
bartlett, blackman, hamming, kaiser
Notes
-----
The Hanning window is defined as
.. math:: w(n) = 0.5 - 0.5cos\\left(\\frac{2\\pi{n}}{M-1}\\right)
\\qquad 0 \\leq n \\leq M-1
The Hanning window was named for Julius von Hann, an Austrian meteorologist.
It is also known as the Cosine Bell. Some authors prefer that it be
called a Hann window, to help avoid confusion with the very similar
Hamming window.
Most references to the Hanning window come from the signal processing
literature, where it is used as one of many windowing functions for
smoothing values. It is also known as an apodization (which means
"removing the foot", i.e. smoothing discontinuities at the beginning
and end of the sampled signal) or tapering function.
References
----------
.. [1] Blackman, R.B. and Tukey, J.W., (1958) The measurement of power
spectra, Dover Publications, New York.
.. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics",
The University of Alberta Press, 1975, pp. 106-108.
.. [3] Wikipedia, "Window function",
http://en.wikipedia.org/wiki/Window_function
.. [4] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling,
"Numerical Recipes", Cambridge University Press, 1986, page 425.
Examples
--------
>>> np.hanning(12)
array([ 0. , 0.07937323, 0.29229249, 0.57115742, 0.82743037,
0.97974649, 0.97974649, 0.82743037, 0.57115742, 0.29229249,
0.07937323, 0. ])
Plot the window and its frequency response:
>>> from numpy.fft import fft, fftshift
>>> window = np.hanning(51)
>>> plt.plot(window)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Hann window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Amplitude")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Sample")
<matplotlib.text.Text object at 0x...>
>>> plt.show()
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> A = fft(window, 2048) / 25.5
>>> mag = np.abs(fftshift(A))
>>> freq = np.linspace(-0.5, 0.5, len(A))
>>> response = 20 * np.log10(mag)
>>> response = np.clip(response, -100, 100)
>>> plt.plot(freq, response)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Frequency response of the Hann window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Magnitude [dB]")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Normalized frequency [cycles per sample]")
<matplotlib.text.Text object at 0x...>
>>> plt.axis('tight')
(-0.5, 0.5, -100.0, ...)
>>> plt.show()
"""
if M < 1:
return array([])
if M == 1:
return ones(1, float)
n = arange(0, M)
return 0.5-0.5*cos(2.0*pi*n/(M-1))
def hamming(M):
"""
Return the Hamming window.
The Hamming window is a taper formed by using a weighted cosine.
Parameters
----------
M : int
Number of points in the output window. If zero or less, an
empty array is returned.
Returns
-------
out : ndarray
The window, with the maximum value normalized to one (the value
one appears only if the number of samples is odd).
See Also
--------
bartlett, blackman, hanning, kaiser
Notes
-----
The Hamming window is defined as
.. math:: w(n) = 0.54 - 0.46cos\\left(\\frac{2\\pi{n}}{M-1}\\right)
\\qquad 0 \\leq n \\leq M-1
The Hamming window was named for R. W. Hamming, an associate of J. W. Tukey,
and is described in Blackman and Tukey. It was recommended for
smoothing the truncated autocovariance function in the time domain.
Most references to the Hamming window come from the signal processing
literature, where it is used as one of many windowing functions for
smoothing values. It is also known as an apodization (which means
"removing the foot", i.e. smoothing discontinuities at the beginning
and end of the sampled signal) or tapering function.
References
----------
.. [1] Blackman, R.B. and Tukey, J.W., (1958) The measurement of power
spectra, Dover Publications, New York.
.. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics", The
University of Alberta Press, 1975, pp. 109-110.
.. [3] Wikipedia, "Window function",
http://en.wikipedia.org/wiki/Window_function
.. [4] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling,
"Numerical Recipes", Cambridge University Press, 1986, page 425.
Examples
--------
>>> np.hamming(12)
array([ 0.08 , 0.15302337, 0.34890909, 0.60546483, 0.84123594,
0.98136677, 0.98136677, 0.84123594, 0.60546483, 0.34890909,
0.15302337, 0.08 ])
Plot the window and the frequency response:
>>> from numpy.fft import fft, fftshift
>>> window = np.hamming(51)
>>> plt.plot(window)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Hamming window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Amplitude")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Sample")
<matplotlib.text.Text object at 0x...>
>>> plt.show()
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> A = fft(window, 2048) / 25.5
>>> mag = np.abs(fftshift(A))
>>> freq = np.linspace(-0.5, 0.5, len(A))
>>> response = 20 * np.log10(mag)
>>> response = np.clip(response, -100, 100)
>>> plt.plot(freq, response)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Frequency response of Hamming window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Magnitude [dB]")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Normalized frequency [cycles per sample]")
<matplotlib.text.Text object at 0x...>
>>> plt.axis('tight')
(-0.5, 0.5, -100.0, ...)
>>> plt.show()
"""
if M < 1:
return array([])
if M == 1:
return ones(1, float)
n = arange(0, M)
return 0.54-0.46*cos(2.0*pi*n/(M-1))
## Code from cephes for i0
_i0A = [
-4.41534164647933937950E-18,
3.33079451882223809783E-17,
-2.43127984654795469359E-16,
1.71539128555513303061E-15,
-1.16853328779934516808E-14,
7.67618549860493561688E-14,
-4.85644678311192946090E-13,
2.95505266312963983461E-12,
-1.72682629144155570723E-11,
9.67580903537323691224E-11,
-5.18979560163526290666E-10,
2.65982372468238665035E-9,
-1.30002500998624804212E-8,
6.04699502254191894932E-8,
-2.67079385394061173391E-7,
1.11738753912010371815E-6,
-4.41673835845875056359E-6,
1.64484480707288970893E-5,
-5.75419501008210370398E-5,
1.88502885095841655729E-4,
-5.76375574538582365885E-4,
1.63947561694133579842E-3,
-4.32430999505057594430E-3,
1.05464603945949983183E-2,
-2.37374148058994688156E-2,
4.93052842396707084878E-2,
-9.49010970480476444210E-2,
1.71620901522208775349E-1,
-3.04682672343198398683E-1,
6.76795274409476084995E-1]
_i0B = [
-7.23318048787475395456E-18,
-4.83050448594418207126E-18,
4.46562142029675999901E-17,
3.46122286769746109310E-17,
-2.82762398051658348494E-16,
-3.42548561967721913462E-16,
1.77256013305652638360E-15,
3.81168066935262242075E-15,
-9.55484669882830764870E-15,
-4.15056934728722208663E-14,
1.54008621752140982691E-14,
3.85277838274214270114E-13,
7.18012445138366623367E-13,
-1.79417853150680611778E-12,
-1.32158118404477131188E-11,
-3.14991652796324136454E-11,
1.18891471078464383424E-11,
4.94060238822496958910E-10,
3.39623202570838634515E-9,
2.26666899049817806459E-8,
2.04891858946906374183E-7,
2.89137052083475648297E-6,
6.88975834691682398426E-5,
3.36911647825569408990E-3,
8.04490411014108831608E-1]
def _chbevl(x, vals):
b0 = vals[0]
b1 = 0.0
for i in range(1, len(vals)):
b2 = b1
b1 = b0
b0 = x*b1 - b2 + vals[i]
return 0.5*(b0 - b2)
def _i0_1(x):
return exp(x) * _chbevl(x/2.0-2, _i0A)
def _i0_2(x):
return exp(x) * _chbevl(32.0/x - 2.0, _i0B) / sqrt(x)
def i0(x):
"""
Modified Bessel function of the first kind, order 0.
Usually denoted :math:`I_0`. This function does broadcast, but will *not*
"up-cast" int dtype arguments unless accompanied by at least one float or
complex dtype argument (see Raises below).
Parameters
----------
x : array_like, dtype float or complex
Argument of the Bessel function.
Returns
-------
out : ndarray, shape = x.shape, dtype = x.dtype
The modified Bessel function evaluated at each of the elements of `x`.
Raises
------
TypeError: array cannot be safely cast to required type
If argument consists exclusively of int dtypes.
See Also
--------
scipy.special.iv, scipy.special.ive
Notes
-----
We use the algorithm published by Clenshaw [1]_ and referenced by
Abramowitz and Stegun [2]_, for which the function domain is
partitioned into the two intervals [0,8] and (8,inf), and Chebyshev
polynomial expansions are employed in each interval. Relative error on
the domain [0,30] using IEEE arithmetic is documented [3]_ as having a
peak of 5.8e-16 with an rms of 1.4e-16 (n = 30000).
References
----------
.. [1] C. W. Clenshaw, "Chebyshev series for mathematical functions", in
*National Physical Laboratory Mathematical Tables*, vol. 5, London:
Her Majesty's Stationery Office, 1962.
.. [2] M. Abramowitz and I. A. Stegun, *Handbook of Mathematical
Functions*, 10th printing, New York: Dover, 1964, pp. 379.
http://www.math.sfu.ca/~cbm/aands/page_379.htm
.. [3] http://kobesearch.cpan.org/htdocs/Math-Cephes/Math/Cephes.html
Examples
--------
>>> np.i0([0.])
array(1.0)
>>> np.i0([0., 1. + 2j])
array([ 1.00000000+0.j , 0.18785373+0.64616944j])
"""
x = atleast_1d(x).copy()
y = empty_like(x)
ind = (x<0)
x[ind] = -x[ind]
ind = (x<=8.0)
y[ind] = _i0_1(x[ind])
ind2 = ~ind
y[ind2] = _i0_2(x[ind2])
return y.squeeze()
## End of cephes code for i0
def kaiser(M, beta):
"""
Return the Kaiser window.
The Kaiser window is a taper formed by using a Bessel function.
Parameters
----------
M : int
Number of points in the output window. If zero or less, an
empty array is returned.
beta : float
Shape parameter for window.
Returns
-------
out : array
The window, with the maximum value normalized to one (the value
one appears only if the number of samples is odd).
See Also
--------
bartlett, blackman, hamming, hanning
Notes
-----
The Kaiser window is defined as
.. math:: w(n) = I_0\\left( \\beta \\sqrt{1-\\frac{4n^2}{(M-1)^2}}
\\right)/I_0(\\beta)
with
.. math:: \\quad -\\frac{M-1}{2} \\leq n \\leq \\frac{M-1}{2},
where :math:`I_0` is the modified zeroth-order Bessel function.
The Kaiser was named for Jim Kaiser, who discovered a simple
approximation to the DPSS window based on Bessel functions. The Kaiser
window is a very good approximation to the Digital Prolate Spheroidal
Sequence, or Slepian window, which maximizes the energy in the main lobe
of the window relative to the total energy.
The Kaiser can approximate many other windows by varying the beta
parameter.
==== =======================
beta Window shape
==== =======================
0 Rectangular
5 Similar to a Hamming
6 Similar to a Hanning
8.6 Similar to a Blackman
==== =======================
A beta value of 14 is probably a good starting point. Note that as beta
gets large, the window narrows, and so the number of samples needs to be
large enough to sample the increasingly narrow spike, otherwise NaNs will
get returned.
Most references to the Kaiser window come from the signal processing
literature, where it is used as one of many windowing functions for
smoothing values. It is also known as an apodization (which means
"removing the foot", i.e. smoothing discontinuities at the beginning
and end of the sampled signal) or tapering function.
References
----------
.. [1] J. F. Kaiser, "Digital Filters" - Ch 7 in "Systems analysis by
digital computer", Editors: F.F. Kuo and J.F. Kaiser, p 218-285.
John Wiley and Sons, New York, (1966).
.. [2] E.R. Kanasewich, "Time Sequence Analysis in Geophysics", The
University of Alberta Press, 1975, pp. 177-178.
.. [3] Wikipedia, "Window function",
http://en.wikipedia.org/wiki/Window_function
Examples
--------
>>> np.kaiser(12, 14)
array([ 7.72686684e-06, 3.46009194e-03, 4.65200189e-02,
2.29737120e-01, 5.99885316e-01, 9.45674898e-01,
9.45674898e-01, 5.99885316e-01, 2.29737120e-01,
4.65200189e-02, 3.46009194e-03, 7.72686684e-06])
Plot the window and the frequency response:
>>> from numpy.fft import fft, fftshift
>>> window = np.kaiser(51, 14)
>>> plt.plot(window)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Kaiser window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Amplitude")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Sample")
<matplotlib.text.Text object at 0x...>
>>> plt.show()
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> A = fft(window, 2048) / 25.5
>>> mag = np.abs(fftshift(A))
>>> freq = np.linspace(-0.5, 0.5, len(A))
>>> response = 20 * np.log10(mag)
>>> response = np.clip(response, -100, 100)
>>> plt.plot(freq, response)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Frequency response of Kaiser window")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Magnitude [dB]")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("Normalized frequency [cycles per sample]")
<matplotlib.text.Text object at 0x...>
>>> plt.axis('tight')
(-0.5, 0.5, -100.0, ...)
>>> plt.show()
"""
from numpy.dual import i0
if M == 1:
return np.array([1.])
n = arange(0, M)
alpha = (M-1)/2.0
return i0(beta * sqrt(1-((n-alpha)/alpha)**2.0))/i0(float(beta))
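# Illustrative sketch added for exposition (not part of the original module):
# how `beta` moves the Kaiser window between the shapes listed in the table
# above. The helper name `_example_kaiser_beta` is hypothetical.
def _example_kaiser_beta():
    import numpy as np
    m = 51
    rectangular = np.kaiser(m, 0.0)        # beta = 0 gives an all-ones window
    hamming_like = np.kaiser(m, 5.0)       # roughly a Hamming-shaped taper
    assert np.allclose(rectangular, np.ones(m))
    # larger beta -> stronger taper -> smaller total window "area"
    assert hamming_like.sum() < rectangular.sum()
    return rectangular, hamming_like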
def sinc(x):
"""
Return the sinc function.
The sinc function is :math:`\\sin(\\pi x)/(\\pi x)`.
Parameters
----------
x : ndarray
Array (possibly multi-dimensional) of values for which to
calculate ``sinc(x)``.
Returns
-------
out : ndarray
``sinc(x)``, which has the same shape as the input.
Notes
-----
``sinc(0)`` is the limit value 1.
The name sinc is short for "sine cardinal" or "sinus cardinalis".
The sinc function is used in various signal processing applications,
including in anti-aliasing, in the construction of a Lanczos resampling
filter, and in interpolation.
For bandlimited interpolation of discrete-time signals, the ideal
interpolation kernel is proportional to the sinc function.
References
----------
.. [1] Weisstein, Eric W. "Sinc Function." From MathWorld--A Wolfram Web
Resource. http://mathworld.wolfram.com/SincFunction.html
.. [2] Wikipedia, "Sinc function",
http://en.wikipedia.org/wiki/Sinc_function
Examples
--------
>>> x = np.linspace(-4, 4, 41)
>>> np.sinc(x)
array([ -3.89804309e-17, -4.92362781e-02, -8.40918587e-02,
-8.90384387e-02, -5.84680802e-02, 3.89804309e-17,
6.68206631e-02, 1.16434881e-01, 1.26137788e-01,
8.50444803e-02, -3.89804309e-17, -1.03943254e-01,
-1.89206682e-01, -2.16236208e-01, -1.55914881e-01,
3.89804309e-17, 2.33872321e-01, 5.04551152e-01,
7.56826729e-01, 9.35489284e-01, 1.00000000e+00,
9.35489284e-01, 7.56826729e-01, 5.04551152e-01,
2.33872321e-01, 3.89804309e-17, -1.55914881e-01,
-2.16236208e-01, -1.89206682e-01, -1.03943254e-01,
-3.89804309e-17, 8.50444803e-02, 1.26137788e-01,
1.16434881e-01, 6.68206631e-02, 3.89804309e-17,
-5.84680802e-02, -8.90384387e-02, -8.40918587e-02,
-4.92362781e-02, -3.89804309e-17])
>>> plt.plot(x, np.sinc(x))
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Sinc Function")
<matplotlib.text.Text object at 0x...>
>>> plt.ylabel("Amplitude")
<matplotlib.text.Text object at 0x...>
>>> plt.xlabel("X")
<matplotlib.text.Text object at 0x...>
>>> plt.show()
It works in 2-D as well:
>>> x = np.linspace(-4, 4, 401)
>>> xx = np.outer(x, x)
>>> plt.imshow(np.sinc(xx))
<matplotlib.image.AxesImage object at 0x...>
"""
x = np.asanyarray(x)
y = pi * where(x == 0, 1.0e-20, x)
return sin(y)/y
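# Illustrative sketch added for exposition (not part of the original module):
# the "ideal interpolation kernel" remark in the Notes above, i.e.
# reconstructing a band-limited signal from its samples with a sum of shifted
# sinc kernels. The helper name `_example_sinc_interpolation` is hypothetical.
def _example_sinc_interpolation():
    import numpy as np
    t = np.arange(16)                          # integer sample times
    samples = np.sin(2 * np.pi * 0.05 * t)     # well below the Nyquist rate
    # Whittaker-Shannon style reconstruction: sum_k samples[k] * sinc(t - k).
    # At the original sample times the shifted sincs act as a Kronecker delta,
    # so the samples are reproduced exactly.
    kernel = np.sinc(t[:, None] - t[None, :])
    assert np.allclose(kernel.dot(samples), samples)
    # Between samples the same sum gives a smooth band-limited interpolant
    # (only approximate here because the sample set is finite).
    t_fine = np.linspace(4.0, 11.0, 50)
    return np.sinc(t_fine[:, None] - t[None, :]).dot(samples)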
def msort(a):
"""
Return a copy of an array sorted along the first axis.
Parameters
----------
a : array_like
Array to be sorted.
Returns
-------
sorted_array : ndarray
Array of the same type and shape as `a`.
See Also
--------
sort
Notes
-----
``np.msort(a)`` is equivalent to ``np.sort(a, axis=0)``.
"""
b = array(a, subok=True, copy=True)
b.sort(0)
return b
def median(a, axis=None, out=None, overwrite_input=False):
"""
Compute the median along the specified axis.
Returns the median of the array elements.
Parameters
----------
a : array_like
Input array or object that can be converted to an array.
axis : int, optional
Axis along which the medians are computed. The default (axis=None)
is to compute the median along a flattened version of the array.
out : ndarray, optional
Alternative output array in which to place the result. It must have
the same shape and buffer length as the expected output, but the
type (of the output) will be cast if necessary.
overwrite_input : bool, optional
If True, then allow use of memory of input array (a) for
calculations. The input array will be modified by the call to
median. This will save memory when you do not need to preserve the
contents of the input array. Treat the input as undefined, but it
will probably be fully or partially sorted. Default is False. Note
that, if `overwrite_input` is True and the input is not already an
ndarray, an error will be raised.
Returns
-------
median : ndarray
A new array holding the result (unless `out` is specified, in which
case that array is returned instead). If the input contains
integers, or floats of smaller precision than 64, then the output
data-type is float64. Otherwise, the output data-type is the same
as that of the input.
See Also
--------
mean, percentile
Notes
-----
Given a vector V of length N, the median of V is the middle value of
a sorted copy of V, ``V_sorted`` - i.e., ``V_sorted[(N-1)/2]``, when N is
odd. When N is even, it is the average of the two middle values of
``V_sorted``.
Examples
--------
>>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10, 7, 4],
[ 3, 2, 1]])
>>> np.median(a)
3.5
>>> np.median(a, axis=0)
array([ 6.5, 4.5, 2.5])
>>> np.median(a, axis=1)
array([ 7., 2.])
>>> m = np.median(a, axis=0)
>>> out = np.zeros_like(m)
>>> np.median(a, axis=0, out=m)
array([ 6.5, 4.5, 2.5])
>>> m
array([ 6.5, 4.5, 2.5])
>>> b = a.copy()
>>> np.median(b, axis=1, overwrite_input=True)
array([ 7., 2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.median(b, axis=None, overwrite_input=True)
3.5
>>> assert not np.all(a==b)
"""
a = np.asanyarray(a)
if axis is not None and axis >= a.ndim:
raise IndexError("axis %d out of bounds (%d)" % (axis, a.ndim))
if overwrite_input:
if axis is None:
part = a.ravel()
sz = part.size
if sz % 2 == 0:
szh = sz // 2
part.partition((szh - 1, szh))
else:
part.partition((sz - 1) // 2)
else:
sz = a.shape[axis]
if sz % 2 == 0:
szh = sz // 2
a.partition((szh - 1, szh), axis=axis)
else:
a.partition((sz - 1) // 2, axis=axis)
part = a
else:
if axis is None:
sz = a.size
else:
sz = a.shape[axis]
if sz % 2 == 0:
part = partition(a, ((sz // 2) - 1, sz // 2), axis=axis)
else:
part = partition(a, (sz - 1) // 2, axis=axis)
if part.shape == ():
# make 0-D arrays work
return part.item()
if axis is None:
axis = 0
indexer = [slice(None)] * part.ndim
index = part.shape[axis] // 2
if part.shape[axis] % 2 == 1:
# index with slice to allow mean (below) to work
indexer[axis] = slice(index, index+1)
else:
indexer[axis] = slice(index-1, index+1)
# Use mean in odd and even case to coerce data type
# and check, use out array.
return mean(part[indexer], axis=axis, out=out)
def percentile(a, q, axis=None, out=None, overwrite_input=False):
"""
Compute the qth percentile of the data along the specified axis.
Returns the qth percentile of the array elements.
Parameters
----------
a : array_like
Input array or object that can be converted to an array.
q : float in range of [0,100] (or sequence of floats)
Percentile to compute, which must be between 0 and 100 inclusive.
axis : int, optional
Axis along which the percentiles are computed. The default (None)
is to compute the percentile along a flattened version of the array.
out : ndarray, optional
Alternative output array in which to place the result. It must
have the same shape and buffer length as the expected output,
but the type (of the output) will be cast if necessary.
overwrite_input : bool, optional
If True, then allow use of memory of input array `a` for
calculations. The input array will be modified by the call to
median. This will save memory when you do not need to preserve
the contents of the input array. Treat the input as undefined,
but it will probably be fully or partially sorted.
Default is False. Note that, if `overwrite_input` is True and the
input is not already an array, an error will be raised.
Returns
-------
percentile : scalar or ndarray
If a single percentile `q` is given and axis=None a scalar is
returned. If multiple percentiles `q` are given an array holding
the result is returned. The results are listed in the first axis.
(If `out` is specified, that array is returned
instead.) If the input contains integers, or floats of smaller
precision than 64, then the output data-type is float64. Otherwise,
the output data-type is the same as that of the input.
See Also
--------
mean, median
Notes
-----
Given a vector V of length N, the q-th percentile of V is the q-th ranked
value in a sorted copy of V. If the normalized ranking does not match q
exactly, the result is interpolated linearly from the values of the two
nearest neighbors. This function is the same as the median if ``q=50``,
the same as the minimum if ``q=0`` and the same as the maximum if
``q=100``.
Examples
--------
>>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10, 7, 4],
[ 3, 2, 1]])
>>> np.percentile(a, 50)
3.5
>>> np.percentile(a, 50, axis=0)
array([ 6.5, 4.5, 2.5])
>>> np.percentile(a, 50, axis=1)
array([ 7., 2.])
>>> m = np.percentile(a, 50, axis=0)
>>> out = np.zeros_like(m)
>>> np.percentile(a, 50, axis=0, out=m)
array([ 6.5, 4.5, 2.5])
>>> m
array([ 6.5, 4.5, 2.5])
>>> b = a.copy()
>>> np.percentile(b, 50, axis=1, overwrite_input=True)
array([ 7., 2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.percentile(b, 50, axis=None, overwrite_input=True)
3.5
"""
a = np.asarray(a)
if q == 0:
return a.min(axis=axis, out=out)
elif q == 100:
return a.max(axis=axis, out=out)
if overwrite_input:
if axis is None:
sorted = a.ravel()
sorted.sort()
else:
a.sort(axis=axis)
sorted = a
else:
sorted = sort(a, axis=axis)
if axis is None:
axis = 0
return _compute_qth_percentile(sorted, q, axis, out)
# handle sequence of q's without calling sort multiple times
def _compute_qth_percentile(sorted, q, axis, out):
if not isscalar(q):
p = [_compute_qth_percentile(sorted, qi, axis, None)
for qi in q]
if out is not None:
out.flat = p
return p
q = q / 100.0
if (q < 0) or (q > 1):
raise ValueError("percentile must be in the range [0, 100]")
indexer = [slice(None)] * sorted.ndim
Nx = sorted.shape[axis]
index = q*(Nx-1)
i = int(index)
if i == index:
indexer[axis] = slice(i, i+1)
weights = array(1)
sumval = 1.0
else:
indexer[axis] = slice(i, i+2)
j = i + 1
weights = array([(j - index), (index - i)], float)
wshape = [1]*sorted.ndim
wshape[axis] = 2
weights.shape = wshape
sumval = weights.sum()
# Use add.reduce in both cases to coerce data type as well as
# check and use out array.
return add.reduce(sorted[indexer]*weights, axis=axis, out=out)/sumval
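# Illustrative sketch added for exposition (not part of the original module):
# the linear weighting used in `_compute_qth_percentile` above -- a percentile
# that falls between two order statistics is their weighted average. The
# helper name `_example_percentile_weights` is hypothetical.
def _example_percentile_weights():
    import numpy as np
    a = np.array([1.0, 2.0, 3.0, 4.0])
    # q = 25 -> index = 0.25 * (4 - 1) = 0.75, i.e. between a[0] and a[1]
    # with weights (1 - 0.75) and 0.75.
    expected = 0.25 * 1.0 + 0.75 * 2.0
    assert np.isclose(np.percentile(a, 25), expected)
    return expected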
def trapz(y, x=None, dx=1.0, axis=-1):
"""
Integrate along the given axis using the composite trapezoidal rule.
Integrate `y` (`x`) along given axis.
Parameters
----------
y : array_like
Input array to integrate.
x : array_like, optional
The sample points corresponding to the `y` values. If `x` is None,
the sample points are assumed to be evenly spaced `dx` apart.
dx : scalar, optional
The spacing between sample points when `x` is None. Default is 1.
axis : int, optional
The axis along which to integrate. Default is the last axis.
Returns
-------
trapz : float
Definite integral as approximated by trapezoidal rule.
See Also
--------
sum, cumsum
Notes
-----
Image [2]_ illustrates the trapezoidal rule: the y-axis locations of the
points are taken from the `y` array, and by default the x-axis distances
between points are 1.0; they can instead be provided through the `x`
array or the `dx` scalar. The return value equals the combined area
under the red lines.
References
----------
.. [1] Wikipedia page: http://en.wikipedia.org/wiki/Trapezoidal_rule
.. [2] Illustration image:
http://en.wikipedia.org/wiki/File:Composite_trapezoidal_rule_illustration.png
Examples
--------
>>> np.trapz([1,2,3])
4.0
>>> np.trapz([1,2,3], x=[4,6,8])
8.0
>>> np.trapz([1,2,3], dx=2)
8.0
>>> a = np.arange(6).reshape(2, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5]])
>>> np.trapz(a, axis=0)
array([ 1.5, 2.5, 3.5])
>>> np.trapz(a, axis=1)
array([ 2., 8.])
"""
y = asanyarray(y)
if x is None:
d = dx
else:
x = asanyarray(x)
if x.ndim == 1:
d = diff(x)
# reshape to correct shape
shape = [1]*y.ndim
shape[axis] = d.shape[0]
d = d.reshape(shape)
else:
d = diff(x, axis=axis)
nd = len(y.shape)
slice1 = [slice(None)]*nd
slice2 = [slice(None)]*nd
slice1[axis] = slice(1, None)
slice2[axis] = slice(None, -1)
try:
        ret = (d * (y[slice1] + y[slice2]) / 2.0).sum(axis)
except ValueError: # Operations didn't work, cast to ndarray
d = np.asarray(d)
y = np.asarray(y)
ret = add.reduce(d * (y[slice1]+y[slice2])/2.0, axis)
return ret
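# Editorial sketch (not part of the original module): verifies trapz against
# an explicit sum of trapezoid areas on a non-uniform grid. The helper is an
# illustrative addition and is never called by library code.
def _trapz_manual_check():
    x = np.array([0.0, 1.0, 3.0, 6.0])
    y = np.array([0.0, 1.0, 2.0, 3.0])
    # Sum of (width * mean height) over each interval.
    manual = sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2.0
                 for i in range(len(x) - 1))
    assert np.allclose(trapz(y, x), manual)
    return manual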
#always succeed
def add_newdoc(place, obj, doc):
"""Adds documentation to obj which is in module place.
If doc is a string add it to obj as a docstring
If doc is a tuple, then the first element is interpreted as
an attribute of obj and the second as the docstring
(method, docstring)
If doc is a list, then each element of the list should be a
sequence of length two --> [(method1, docstring1),
(method2, docstring2), ...]
This routine never raises an error.
    This routine cannot modify read-only docstrings, such as those that
    appear on new-style classes or built-in functions. Because this
    routine never raises an error, the caller must check manually
    that the docstrings were changed.
"""
try:
new = {}
exec('from %s import %s' % (place, obj), new)
if isinstance(doc, str):
add_docstring(new[obj], doc.strip())
elif isinstance(doc, tuple):
add_docstring(getattr(new[obj], doc[0]), doc[1].strip())
elif isinstance(doc, list):
for val in doc:
add_docstring(getattr(new[obj], val[0]), val[1].strip())
except:
pass
# Based on scitools meshgrid
def meshgrid(*xi, **kwargs):
"""
Return coordinate matrices from two or more coordinate vectors.
Make N-D coordinate arrays for vectorized evaluations of
N-D scalar/vector fields over N-D grids, given
one-dimensional coordinate arrays x1, x2,..., xn.
Parameters
----------
x1, x2,..., xn : array_like
1-D arrays representing the coordinates of a grid.
indexing : {'xy', 'ij'}, optional
Cartesian ('xy', default) or matrix ('ij') indexing of output.
See Notes for more details.
sparse : bool, optional
If True a sparse grid is returned in order to conserve memory.
Default is False.
copy : bool, optional
        If False, a view into the original arrays is returned in order to
conserve memory. Default is True. Please note that
``sparse=False, copy=False`` will likely return non-contiguous
arrays. Furthermore, more than one element of a broadcast array
may refer to a single memory location. If you need to write to the
arrays, make copies first.
Returns
-------
X1, X2,..., XN : ndarray
        For vectors `x1`, `x2`,..., `xn` with lengths ``Ni=len(xi)``,
return ``(N1, N2, N3,...Nn)`` shaped arrays if indexing='ij'
or ``(N2, N1, N3,...Nn)`` shaped arrays if indexing='xy'
with the elements of `xi` repeated to fill the matrix along
the first dimension for `x1`, the second for `x2` and so on.
Notes
-----
This function supports both indexing conventions through the indexing
keyword argument. Giving the string 'ij' returns a meshgrid with
matrix indexing, while 'xy' returns a meshgrid with Cartesian indexing.
In the 2-D case with inputs of length M and N, the outputs are of shape
(N, M) for 'xy' indexing and (M, N) for 'ij' indexing. In the 3-D case
with inputs of length M, N and P, outputs are of shape (N, M, P) for
'xy' indexing and (M, N, P) for 'ij' indexing. The difference is
illustrated by the following code snippet::
xv, yv = meshgrid(x, y, sparse=False, indexing='ij')
for i in range(nx):
for j in range(ny):
# treat xv[i,j], yv[i,j]
xv, yv = meshgrid(x, y, sparse=False, indexing='xy')
for i in range(nx):
for j in range(ny):
# treat xv[j,i], yv[j,i]
In the 1-D and 0-D case, the indexing and sparse keywords have no
effect.
See Also
--------
index_tricks.mgrid : Construct a multi-dimensional "meshgrid"
using indexing notation.
index_tricks.ogrid : Construct an open multi-dimensional "meshgrid"
using indexing notation.
Examples
--------
>>> nx, ny = (3, 2)
>>> x = np.linspace(0, 1, nx)
>>> y = np.linspace(0, 1, ny)
>>> xv, yv = meshgrid(x, y)
>>> xv
array([[ 0. , 0.5, 1. ],
[ 0. , 0.5, 1. ]])
>>> yv
array([[ 0., 0., 0.],
[ 1., 1., 1.]])
>>> xv, yv = meshgrid(x, y, sparse=True) # make sparse output arrays
>>> xv
array([[ 0. , 0.5, 1. ]])
>>> yv
array([[ 0.],
[ 1.]])
`meshgrid` is very useful to evaluate functions on a grid.
>>> x = np.arange(-5, 5, 0.1)
>>> y = np.arange(-5, 5, 0.1)
>>> xx, yy = meshgrid(x, y, sparse=True)
>>> z = np.sin(xx**2 + yy**2) / (xx**2 + yy**2)
>>> h = plt.contourf(x,y,z)
"""
if len(xi) < 2:
msg = 'meshgrid() takes 2 or more arguments (%d given)' % int(len(xi) > 0)
raise ValueError(msg)
args = np.atleast_1d(*xi)
ndim = len(args)
copy_ = kwargs.get('copy', True)
sparse = kwargs.get('sparse', False)
indexing = kwargs.get('indexing', 'xy')
if not indexing in ['xy', 'ij']:
raise ValueError("Valid values for `indexing` are 'xy' and 'ij'.")
s0 = (1,) * ndim
output = [x.reshape(s0[:i] + (-1,) + s0[i + 1::]) for i, x in enumerate(args)]
shape = [x.size for x in output]
if indexing == 'xy':
# switch first and second axis
output[0].shape = (1, -1) + (1,)*(ndim - 2)
output[1].shape = (-1, 1) + (1,)*(ndim - 2)
shape[0], shape[1] = shape[1], shape[0]
if sparse:
if copy_:
return [x.copy() for x in output]
else:
return output
else:
# Return the full N-D matrix (not only the 1-D vector)
if copy_:
mult_fact = np.ones(shape, dtype=int)
return [x * mult_fact for x in output]
else:
return np.broadcast_arrays(*output)
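# Editorial sketch (not part of the original module): demonstrates the shape
# difference between 'xy' and 'ij' indexing described in the Notes above.
# The helper is an illustrative addition and is never called by library code.
def _meshgrid_indexing_demo():
    x = np.arange(3)  # length M = 3
    y = np.arange(2)  # length N = 2
    xv_xy, yv_xy = meshgrid(x, y, indexing='xy')
    xv_ij, yv_ij = meshgrid(x, y, indexing='ij')
    assert xv_xy.shape == (2, 3)  # (N, M) for Cartesian indexing
    assert xv_ij.shape == (3, 2)  # (M, N) for matrix indexing
    return xv_xy, xv_ij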
def delete(arr, obj, axis=None):
"""
Return a new array with sub-arrays along an axis deleted. For a one
dimensional array, this returns those entries not returned by
`arr[obj]`.
Parameters
----------
arr : array_like
Input array.
obj : slice, int or array of ints
Indicate which sub-arrays to remove.
axis : int, optional
The axis along which to delete the subarray defined by `obj`.
If `axis` is None, `obj` is applied to the flattened array.
Returns
-------
out : ndarray
A copy of `arr` with the elements specified by `obj` removed. Note
that `delete` does not occur in-place. If `axis` is None, `out` is
a flattened array.
See Also
--------
insert : Insert elements into an array.
append : Append elements at the end of an array.
Notes
-----
Often it is preferable to use a boolean mask. For example:
>>> mask = np.ones(len(arr), dtype=bool)
>>> mask[[0,2,4]] = False
>>> result = arr[mask,...]
Is equivalent to `np.delete(arr, [0,2,4], axis=0)`, but allows further
use of `mask`.
Examples
--------
>>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
>>> arr
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
>>> np.delete(arr, 1, 0)
array([[ 1, 2, 3, 4],
[ 9, 10, 11, 12]])
>>> np.delete(arr, np.s_[::2], 1)
array([[ 2, 4],
[ 6, 8],
[10, 12]])
>>> np.delete(arr, [1,3,5], None)
array([ 1, 3, 5, 7, 8, 9, 10, 11, 12])
"""
wrap = None
if type(arr) is not ndarray:
try:
wrap = arr.__array_wrap__
except AttributeError:
pass
arr = asarray(arr)
ndim = arr.ndim
if axis is None:
if ndim != 1:
arr = arr.ravel()
ndim = arr.ndim;
axis = ndim-1;
if ndim == 0:
warnings.warn("in the future the special handling of scalars "
"will be removed from delete and raise an error",
DeprecationWarning)
if wrap:
return wrap(arr)
else:
return arr.copy()
slobj = [slice(None)]*ndim
N = arr.shape[axis]
newshape = list(arr.shape)
if isinstance(obj, slice):
start, stop, step = obj.indices(N)
xr = range(start, stop, step)
numtodel = len(xr)
if numtodel <= 0:
if wrap:
return wrap(arr.copy())
else:
return arr.copy()
# Invert if step is negative:
if step < 0:
step = -step
start = xr[-1]
stop = xr[0] + 1
newshape[axis] -= numtodel
new = empty(newshape, arr.dtype, arr.flags.fnc)
# copy initial chunk
if start == 0:
pass
else:
slobj[axis] = slice(None, start)
new[slobj] = arr[slobj]
        # copy end chunk
if stop == N:
pass
else:
slobj[axis] = slice(stop-numtodel, None)
slobj2 = [slice(None)]*ndim
slobj2[axis] = slice(stop, None)
new[slobj] = arr[slobj2]
# copy middle pieces
if step == 1:
pass
else: # use array indexing.
keep = ones(stop-start, dtype=bool)
keep[:stop-start:step] = False
slobj[axis] = slice(start, stop-numtodel)
slobj2 = [slice(None)]*ndim
slobj2[axis] = slice(start, stop)
arr = arr[slobj2]
slobj2[axis] = keep
new[slobj] = arr[slobj2]
if wrap:
return wrap(new)
else:
return new
_obj = obj
obj = np.asarray(obj)
# After removing the special handling of booleans and out of
# bounds values, the conversion to the array can be removed.
if obj.dtype == bool:
warnings.warn("in the future insert will treat boolean arrays "
"and array-likes as boolean index instead "
"of casting it to integer", FutureWarning)
obj = obj.astype(intp)
if isinstance(_obj, (int, long, integer)):
# optimization for a single value
obj = obj.item()
if (obj < -N or obj >=N):
raise IndexError("index %i is out of bounds for axis "
"%i with size %i" % (obj, axis, N))
        if (obj < 0):
            obj += N
        newshape[axis] -= 1
new = empty(newshape, arr.dtype, arr.flags.fnc)
slobj[axis] = slice(None, obj)
new[slobj] = arr[slobj]
slobj[axis] = slice(obj, None)
slobj2 = [slice(None)]*ndim
slobj2[axis] = slice(obj+1, None)
new[slobj] = arr[slobj2]
else:
if obj.size == 0 and not isinstance(_obj, np.ndarray):
obj = obj.astype(intp)
if not np.can_cast(obj, intp, 'same_kind'):
# obj.size = 1 special case always failed and would just
# give superfluous warnings.
warnings.warn("using a non-integer array as obj in delete "
"will result in an error in the future", DeprecationWarning)
obj = obj.astype(intp)
keep = ones(N, dtype=bool)
# Test if there are out of bound indices, this is deprecated
inside_bounds = (obj < N) & (obj >= -N)
if not inside_bounds.all():
warnings.warn("in the future out of bounds indices will raise an "
"error instead of being ignored by `numpy.delete`.",
DeprecationWarning)
obj = obj[inside_bounds]
positive_indices = obj >= 0
if not positive_indices.all():
warnings.warn("in the future negative indices will not be ignored "
"by `numpy.delete`.", FutureWarning)
obj = obj[positive_indices]
keep[obj,] = False
slobj[axis] = keep
new = arr[slobj]
if wrap:
return wrap(new)
else:
return new
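# Editorial sketch (not part of the original module): confirms the boolean-mask
# equivalence mentioned in the Notes section of delete. The helper is an
# illustrative addition and is never called by library code.
def _delete_mask_equivalence_demo():
    a = np.arange(10)
    mask = np.ones(len(a), dtype=bool)
    mask[[0, 2, 4]] = False
    assert np.array_equal(a[mask], delete(a, [0, 2, 4], axis=0))
    return a[mask]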
def insert(arr, obj, values, axis=None):
"""
Insert values along the given axis before the given indices.
Parameters
----------
arr : array_like
Input array.
obj : int, slice or sequence of ints
Object that defines the index or indices before which `values` is
inserted.
.. versionadded:: 1.8.0
Support for multiple insertions when `obj` is a single scalar or a
sequence with one element (similar to calling insert multiple
times).
values : array_like
Values to insert into `arr`. If the type of `values` is different
from that of `arr`, `values` is converted to the type of `arr`.
`values` should be shaped so that ``arr[...,obj,...] = values``
is legal.
axis : int, optional
Axis along which to insert `values`. If `axis` is None then `arr`
is flattened first.
Returns
-------
out : ndarray
A copy of `arr` with `values` inserted. Note that `insert`
does not occur in-place: a new array is returned. If
`axis` is None, `out` is a flattened array.
See Also
--------
append : Append elements at the end of an array.
concatenate : Join a sequence of arrays together.
delete : Delete elements from an array.
Notes
-----
    Note that for higher dimensional inserts `obj=0` behaves very differently
    from `obj=[0]`, just as `arr[:,0,:] = values` is different from
    `arr[:,[0],:] = values`.
Examples
--------
>>> a = np.array([[1, 1], [2, 2], [3, 3]])
>>> a
array([[1, 1],
[2, 2],
[3, 3]])
>>> np.insert(a, 1, 5)
array([1, 5, 1, 2, 2, 3, 3])
>>> np.insert(a, 1, 5, axis=1)
array([[1, 5, 1],
[2, 5, 2],
[3, 5, 3]])
Difference between sequence and scalars:
>>> np.insert(a, [1], [[1],[2],[3]], axis=1)
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
>>> np.array_equal(np.insert(a, 1, [1, 2, 3], axis=1),
... np.insert(a, [1], [[1],[2],[3]], axis=1))
True
>>> b = a.flatten()
>>> b
array([1, 1, 2, 2, 3, 3])
>>> np.insert(b, [2, 2], [5, 6])
array([1, 1, 5, 6, 2, 2, 3, 3])
>>> np.insert(b, slice(2, 4), [5, 6])
array([1, 1, 5, 2, 6, 2, 3, 3])
>>> np.insert(b, [2, 2], [7.13, False]) # type casting
array([1, 1, 7, 0, 2, 2, 3, 3])
>>> x = np.arange(8).reshape(2, 4)
>>> idx = (1, 3)
>>> np.insert(x, idx, 999, axis=1)
array([[ 0, 999, 1, 2, 999, 3],
[ 4, 999, 5, 6, 999, 7]])
"""
wrap = None
if type(arr) is not ndarray:
try:
wrap = arr.__array_wrap__
except AttributeError:
pass
arr = asarray(arr)
ndim = arr.ndim
if axis is None:
if ndim != 1:
arr = arr.ravel()
ndim = arr.ndim
axis = ndim-1
else:
if ndim > 0 and (axis < -ndim or axis >= ndim):
raise IndexError("axis %i is out of bounds for an array "
"of dimension %i" % (axis, ndim))
if (axis < 0): axis += ndim
if (ndim == 0):
warnings.warn("in the future the special handling of scalars "
"will be removed from insert and raise an error",
DeprecationWarning)
arr = arr.copy()
arr[...] = values
if wrap:
return wrap(arr)
else:
return arr
slobj = [slice(None)]*ndim
N = arr.shape[axis]
newshape = list(arr.shape)
if isinstance(obj, slice):
# turn it into a range object
indices = arange(*obj.indices(N),**{'dtype':intp})
else:
# need to copy obj, because indices will be changed in-place
indices = np.array(obj)
if indices.dtype == bool:
# See also delete
warnings.warn("in the future insert will treat boolean arrays "
"and array-likes as a boolean index instead "
"of casting it to integer", FutureWarning)
indices = indices.astype(intp)
# Code after warning period:
#if obj.ndim != 1:
# raise ValueError('boolean array argument obj to insert '
# 'must be one dimensional')
#indices = np.flatnonzero(obj)
elif indices.ndim > 1:
raise ValueError("index array argument obj to insert must "
"be one dimensional or scalar")
if indices.size == 1:
index = indices.item()
if index < -N or index > N:
raise IndexError("index %i is out of bounds for axis "
"%i with size %i" % (obj, axis, N))
if (index < 0): index += N
values = array(values, copy=False, ndmin=arr.ndim)
if indices.ndim == 0:
            # broadcasting is very different here, since a[:,0,:] = ... behaves
            # very differently from a[:,[0],:] = ...! This changes values so that
            # it works like the second case. (here a[:,0:1,:])
values = np.rollaxis(values, 0, axis + 1)
numnew = values.shape[axis]
newshape[axis] += numnew
new = empty(newshape, arr.dtype, arr.flags.fnc)
slobj[axis] = slice(None, index)
new[slobj] = arr[slobj]
slobj[axis] = slice(index, index+numnew)
new[slobj] = values
slobj[axis] = slice(index+numnew, None)
slobj2 = [slice(None)] * ndim
slobj2[axis] = slice(index, None)
new[slobj] = arr[slobj2]
if wrap:
return wrap(new)
return new
elif indices.size == 0 and not isinstance(obj, np.ndarray):
# Can safely cast the empty list to intp
indices = indices.astype(intp)
if not np.can_cast(indices, intp, 'same_kind'):
warnings.warn("using a non-integer array as obj in insert "
"will result in an error in the future",
DeprecationWarning)
indices = indices.astype(intp)
indices[indices < 0] += N
numnew = len(indices)
order = indices.argsort(kind='mergesort') # stable sort
indices[order] += np.arange(numnew)
newshape[axis] += numnew
old_mask = ones(newshape[axis], dtype=bool)
old_mask[indices] = False
new = empty(newshape, arr.dtype, arr.flags.fnc)
slobj2 = [slice(None)]*ndim
slobj[axis] = indices
slobj2[axis] = old_mask
new[slobj] = values
new[slobj2] = arr
if wrap:
return wrap(new)
return new
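# Editorial sketch (not part of the original module): illustrates the
# scalar-versus-sequence behaviour of obj described in the Notes above.
# The helper is an illustrative addition and is never called by library code.
def _insert_scalar_vs_sequence_demo():
    a = np.array([[1, 1], [2, 2], [3, 3]])
    col = np.array([7, 8, 9])
    # obj=1 inserts col as one new column -> shape (3, 3).
    by_scalar = insert(a, 1, col, axis=1)
    # obj=[1] inserts len(col) new columns, broadcasting col along each row
    # -> shape (3, 5).
    by_sequence = insert(a, [1], col, axis=1)
    assert by_scalar.shape == (3, 3)
    assert by_sequence.shape == (3, 5)
    return by_scalar, by_sequence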
def append(arr, values, axis=None):
"""
Append values to the end of an array.
Parameters
----------
arr : array_like
Values are appended to a copy of this array.
values : array_like
These values are appended to a copy of `arr`. It must be of the
correct shape (the same shape as `arr`, excluding `axis`). If
`axis` is not specified, `values` can be any shape and will be
flattened before use.
axis : int, optional
The axis along which `values` are appended. If `axis` is not
given, both `arr` and `values` are flattened before use.
Returns
-------
append : ndarray
A copy of `arr` with `values` appended to `axis`. Note that
`append` does not occur in-place: a new array is allocated and
filled. If `axis` is None, `out` is a flattened array.
See Also
--------
insert : Insert elements into an array.
delete : Delete elements from an array.
Examples
--------
>>> np.append([1, 2, 3], [[4, 5, 6], [7, 8, 9]])
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
When `axis` is specified, `values` must have the correct shape.
>>> np.append([[1, 2, 3], [4, 5, 6]], [[7, 8, 9]], axis=0)
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
>>> np.append([[1, 2, 3], [4, 5, 6]], [7, 8, 9], axis=0)
Traceback (most recent call last):
...
ValueError: arrays must have same number of dimensions
"""
arr = asanyarray(arr)
if axis is None:
if arr.ndim != 1:
arr = arr.ravel()
values = ravel(values)
axis = arr.ndim-1
return concatenate((arr, values), axis=axis)
| mit |
hsuantien/scikit-learn | examples/covariance/plot_outlier_detection.py | 235 | 3891 | """
==========================================
Outlier detection with several methods.
==========================================
When the amount of contamination is known, this example illustrates two
different ways of performing :ref:`outlier_detection`:
- based on a robust estimator of covariance, which assumes that the
data are Gaussian distributed and performs better than the One-Class SVM
in that case.
- using the One-Class SVM and its ability to capture the shape of the
data set, hence performing better when the data is strongly
non-Gaussian, i.e. with two well-separated clusters;
The ground truth about inliers and outliers is given by the points' colors,
while the orange-filled area indicates which points are reported as inliers
by each method.
Here, we assume that we know the fraction of outliers in the datasets.
Thus, rather than using the 'predict' method of the objects, we set the
threshold on the decision_function to separate out the corresponding
fraction.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from scipy import stats
from sklearn import svm
from sklearn.covariance import EllipticEnvelope
# Example settings
n_samples = 200
outliers_fraction = 0.25
clusters_separation = [0, 1, 2]
# define two outlier detection tools to be compared
classifiers = {
"One-Class SVM": svm.OneClassSVM(nu=0.95 * outliers_fraction + 0.05,
kernel="rbf", gamma=0.1),
"robust covariance estimator": EllipticEnvelope(contamination=.1)}
# Compare given classifiers under given settings
xx, yy = np.meshgrid(np.linspace(-7, 7, 500), np.linspace(-7, 7, 500))
n_inliers = int((1. - outliers_fraction) * n_samples)
n_outliers = int(outliers_fraction * n_samples)
ground_truth = np.ones(n_samples, dtype=int)
ground_truth[-n_outliers:] = 0
# Fit the problem with varying cluster separation
for i, offset in enumerate(clusters_separation):
np.random.seed(42)
# Data generation
    X1 = 0.3 * np.random.randn(n_inliers // 2, 2) - offset
    X2 = 0.3 * np.random.randn(n_inliers // 2, 2) + offset
X = np.r_[X1, X2]
# Add outliers
X = np.r_[X, np.random.uniform(low=-6, high=6, size=(n_outliers, 2))]
# Fit the model with the One-Class SVM
plt.figure(figsize=(10, 5))
for i, (clf_name, clf) in enumerate(classifiers.items()):
# fit the data and tag outliers
clf.fit(X)
y_pred = clf.decision_function(X).ravel()
threshold = stats.scoreatpercentile(y_pred,
100 * outliers_fraction)
y_pred = y_pred > threshold
n_errors = (y_pred != ground_truth).sum()
# plot the levels lines and the points
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
subplot = plt.subplot(1, 2, i + 1)
subplot.set_title("Outlier detection")
subplot.contourf(xx, yy, Z, levels=np.linspace(Z.min(), threshold, 7),
cmap=plt.cm.Blues_r)
a = subplot.contour(xx, yy, Z, levels=[threshold],
linewidths=2, colors='red')
subplot.contourf(xx, yy, Z, levels=[threshold, Z.max()],
colors='orange')
b = subplot.scatter(X[:-n_outliers, 0], X[:-n_outliers, 1], c='white')
c = subplot.scatter(X[-n_outliers:, 0], X[-n_outliers:, 1], c='black')
subplot.axis('tight')
subplot.legend(
[a.collections[0], b, c],
['learned decision function', 'true inliers', 'true outliers'],
prop=matplotlib.font_manager.FontProperties(size=11))
subplot.set_xlabel("%d. %s (errors: %d)" % (i + 1, clf_name, n_errors))
subplot.set_xlim((-7, 7))
subplot.set_ylim((-7, 7))
plt.subplots_adjust(0.04, 0.1, 0.96, 0.94, 0.1, 0.26)
plt.show()
| bsd-3-clause |
ric2b/Vivaldi-browser | chromium/tools/perf/cli_tools/soundwave/tables/alerts_test.py | 10 | 3647 | # Copyright 2018 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import datetime
import unittest
from cli_tools.soundwave import tables
from core.external_modules import pandas
@unittest.skipIf(pandas is None, 'pandas not available')
class TestAlerts(unittest.TestCase):
def testDataFrameFromJson(self):
data = {
'anomalies': [
{
'key': 'abc123',
'timestamp': '2009-02-13T23:31:30.000',
'testsuite': 'loading.mobile',
'test': 'timeToFirstInteractive/Google',
'master': 'ChromiumPerf',
'bot': 'android-nexus5',
'start_revision': 12345,
'end_revision': 12543,
'median_before_anomaly': 2037.18,
'median_after_anomaly': 2135.540,
'units': 'ms',
'improvement': False,
'bug_id': 55555,
'bisect_status': 'started',
},
{
'key': 'xyz567',
'timestamp': '2009-02-13T23:31:30.000',
'testsuite': 'loading.mobile',
'test': 'timeToFirstInteractive/Wikipedia',
'master': 'ChromiumPerf',
'bot': 'android-nexus5',
'start_revision': 12345,
'end_revision': 12543,
'median_before_anomaly': 2037.18,
'median_after_anomaly': 2135.540,
'units': 'ms',
'improvement': False,
'bug_id': None,
'bisect_status': 'started',
}
]
}
alerts = tables.alerts.DataFrameFromJson(data)
self.assertEqual(len(alerts), 2)
alert = alerts.loc['abc123']
self.assertEqual(alert['timestamp'], datetime.datetime(
year=2009, month=2, day=13, hour=23, minute=31, second=30))
self.assertEqual(alert['bot'], 'ChromiumPerf/android-nexus5')
self.assertEqual(alert['test_suite'], 'loading.mobile')
self.assertEqual(alert['test_case'], 'Google')
self.assertEqual(alert['measurement'], 'timeToFirstInteractive')
self.assertEqual(alert['bug_id'], 55555)
self.assertEqual(alert['status'], 'triaged')
# We expect bug_id's to be integers.
self.assertTrue(pandas.api.types.is_integer_dtype(alerts['bug_id'].dtype))
# Missing bug_id's become 0.
self.assertEqual(alerts.loc['xyz567']['bug_id'], 0)
def testDataFrameFromJson_withSummaryMetric(self):
data = {
'anomalies': [
{
'key': 'abc123',
'timestamp': '2009-02-13T23:31:30.000',
'testsuite': 'loading.mobile',
'test': 'timeToFirstInteractive',
'master': 'ChromiumPerf',
'bot': 'android-nexus5',
'start_revision': 12345,
'end_revision': 12543,
'median_before_anomaly': 2037.18,
'median_after_anomaly': 2135.540,
'units': 'ms',
'improvement': False,
'bug_id': 55555,
'bisect_status': 'started',
}
]
}
alerts = tables.alerts.DataFrameFromJson(data)
self.assertEqual(len(alerts), 1)
alert = alerts.loc['abc123']
self.assertEqual(alert['measurement'], 'timeToFirstInteractive')
self.assertIsNone(alert['test_case'])
def testDataFrameFromJson_noAlerts(self):
data = {'anomalies': []}
alerts = tables.alerts.DataFrameFromJson(data)
self.assertEqual(len(alerts), 0)
| bsd-3-clause |
ldirer/scikit-learn | sklearn/__check_build/__init__.py | 345 | 1671 | """ Module to give helpful messages to the user that did not
compile the scikit properly.
"""
import os
INPLACE_MSG = """
It appears that you are importing a local scikit-learn source tree. For
this, you need to have an inplace install. Maybe you are in the source
directory and you need to try from another location."""
STANDARD_MSG = """
If you have used an installer, please check that it is suited for your
Python version, your operating system and your platform."""
def raise_build_error(e):
# Raise a comprehensible error and list the contents of the
# directory to help debugging on the mailing list.
local_dir = os.path.split(__file__)[0]
msg = STANDARD_MSG
if local_dir == "sklearn/__check_build":
# Picking up the local install: this will work only if the
# install is an 'inplace build'
msg = INPLACE_MSG
dir_content = list()
for i, filename in enumerate(os.listdir(local_dir)):
if ((i + 1) % 3):
dir_content.append(filename.ljust(26))
else:
dir_content.append(filename + '\n')
raise ImportError("""%s
___________________________________________________________________________
Contents of %s:
%s
___________________________________________________________________________
It seems that scikit-learn has not been built correctly.
If you have installed scikit-learn from source, please do not forget
to build the package before using it: run `python setup.py install` or
`make` in the source directory.
%s""" % (e, local_dir, ''.join(dir_content).strip(), msg))
try:
from ._check_build import check_build
except ImportError as e:
raise_build_error(e)
| bsd-3-clause |
jor-/scipy | doc/source/tutorial/examples/optimize_global_1.py | 15 | 1752 | import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
def eggholder(x):
return (-(x[1] + 47) * np.sin(np.sqrt(abs(x[0]/2 + (x[1] + 47))))
-x[0] * np.sin(np.sqrt(abs(x[0] - (x[1] + 47)))))
bounds = [(-512, 512), (-512, 512)]
x = np.arange(-512, 513)
y = np.arange(-512, 513)
xgrid, ygrid = np.meshgrid(x, y)
xy = np.stack([xgrid, ygrid])
results = dict()
results['shgo'] = optimize.shgo(eggholder, bounds)
results['DA'] = optimize.dual_annealing(eggholder, bounds)
results['DE'] = optimize.differential_evolution(eggholder, bounds)
results['BH'] = optimize.basinhopping(eggholder, bounds)
results['shgo_sobol'] = optimize.shgo(eggholder, bounds, n=200, iters=5,
sampling_method='sobol')
fig = plt.figure(figsize=(4.5, 4.5))
ax = fig.add_subplot(111)
im = ax.imshow(eggholder(xy), interpolation='bilinear', origin='lower',
cmap='gray')
ax.set_xlabel('x')
ax.set_ylabel('y')
def plot_point(res, marker='o', color=None):
ax.plot(512+res.x[0], 512+res.x[1], marker=marker, color=color, ms=10)
plot_point(results['BH'], color='y') # basinhopping - yellow
plot_point(results['DE'], color='c') # differential_evolution - cyan
plot_point(results['DA'], color='w') # dual_annealing. - white
# SHGO produces multiple minima, plot them all (with a smaller marker size)
plot_point(results['shgo'], color='r', marker='+')
plot_point(results['shgo_sobol'], color='r', marker='x')
for i in range(results['shgo_sobol'].xl.shape[0]):
ax.plot(512 + results['shgo_sobol'].xl[i, 0],
512 + results['shgo_sobol'].xl[i, 1],
'ro', ms=2)
ax.set_xlim([-4, 514*2])
ax.set_ylim([-4, 514*2])
fig.tight_layout()
plt.show()
| bsd-3-clause |
bytefish/facerec | py/apps/scripts/preprocessing_experiments.py | 1 | 5626 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) Philipp Wagner. All rights reserved.
# Licensed under the BSD license. See LICENSE file in the project root for full license information.
import sys, os
sys.path.append("../..")
from builtins import range
# facerec
from facerec.feature import Fisherfaces, PCA, SpatialHistogram, Identity
from facerec.distance import EuclideanDistance
from facerec.classifier import NearestNeighbor
from facerec.model import PredictableModel
from facerec.validation import KFoldCrossValidation
from facerec.visual import subplot
from facerec.util import minmax_normalize
# required libraries
import numpy as np
# try to import the PIL Image module
try:
from PIL import Image
except ImportError:
import Image
import logging
import matplotlib.cm as cm
class FileNameFilter:
def __init__(self, name):
self._name = name
def __call__(self, filename):
return True
def __repr__(self):
return "FileNameFilter (name=%s)" % (self._name)
class YaleBaseFilter(FileNameFilter):
def __init__(self, min_azimuth, max_azimuth, min_elevation, max_elevation):
FileNameFilter.__init__(self, "Filter YaleFDB Subset1")
self._min_azimuth = min_azimuth
self._max_azimuth = max_azimuth
self._min_elevation = min_elevation
self._max_elevation = max_elevation
def __call__(self, filename):
# We only want the PGM files:
filetype = filename[-4:]
if filetype != ".pgm":
return False
# There are "Ambient" PGM files, ignore them:
if "Ambient" in filename:
return False
azimuth = int(filename[12:16])
elevation = int(filename[17:20])
# Now filter based on angles:
if azimuth < self._min_azimuth or azimuth > self._max_azimuth:
return False
if elevation < self._min_elevation or elevation > self._max_elevation:
return False
return True
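# Editorial sketch (not part of the original script): intended use of the
# filter above. The filename below assumes the Extended Yale B naming
# convention (e.g. "yaleB01_P00A+000E+00.pgm") that the slicing in __call__
# relies on; it is illustrative only and never called by the script.
def _yale_filter_example():
    subset1 = YaleBaseFilter(-25, 25, -25, 25)
    # Azimuth +000 and elevation +00 fall inside [-25, 25], so this is kept.
    return subset1("yaleB01_P00A+000E+00.pgm")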
def read_images(path, fileNameFilter=FileNameFilter("None"), sz=None):
"""Reads the images in a given folder, resizes images on the fly if size is given.
Args:
path: Path to a folder with subfolders representing the subjects (persons).
        sz: A tuple with the target size; if given, images are resized to it.
Returns:
A list [X,y]
X: The images, which is a Python list of numpy arrays.
y: The corresponding labels (the unique number of the subject, person) in a Python list.
"""
c = 0
X,y = [], []
for dirname, dirnames, filenames in os.walk(path):
for subdirname in dirnames:
subject_path = os.path.join(dirname, subdirname)
for filename in os.listdir(subject_path):
if fileNameFilter(filename):
print(filename)
try:
im = Image.open(os.path.join(subject_path, filename))
im = im.convert("L")
# resize to given size (if given)
if (sz is not None):
im = im.resize(sz, Image.ANTIALIAS)
X.append(np.asarray(im, dtype=np.uint8))
y.append(c)
except IOError as e:
print("I/O error: {0}".format(e))
raise e
except:
print("Unexpected error: {0}".format(sys.exc_info()[0]))
raise
c = c+1
return [X,y]
if __name__ == "__main__":
# This is where we write the images, if an output_dir is given
# in command line:
out_dir = None
# You'll need at least a path to your image data, please see
# the tutorial coming with this source code on how to prepare
# your image data:
if len(sys.argv) < 2:
print("USAGE: facerec_demo.py </path/to/images>")
sys.exit()
yale_filter = YaleBaseFilter(-25, 25, -25, 25)
# Now read in the image data. This must be a valid path!
[X,y] = read_images(sys.argv[1], yale_filter)
# Then set up a handler for logging:
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
# Add handler to facerec modules, so we see what's going on inside:
logger = logging.getLogger("facerec")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
    # Define PCA as the feature extraction method:
feature = PCA()
# Define a 1-NN classifier with Euclidean Distance:
classifier = NearestNeighbor(dist_metric=EuclideanDistance(), k=1)
# Define the model as the combination
model = PredictableModel(feature=feature, classifier=classifier)
# Compute the Fisherfaces on the given data (in X) and labels (in y):
model.compute(X, y)
# Then turn the first (at most) 16 eigenvectors into grayscale
# images (note: eigenvectors are stored by column!)
E = []
for i in range(min(model.feature.eigenvectors.shape[1], 16)):
e = model.feature.eigenvectors[:,i].reshape(X[0].shape)
E.append(minmax_normalize(e,0,255, dtype=np.uint8))
    # Plot them and store the plot to "fisherfaces.png"
subplot(title="Fisherfaces", images=E, rows=4, cols=4, sptitle="Fisherface", colormap=cm.jet, filename="fisherfaces.png")
# Perform a 10-fold cross validation
cv = KFoldCrossValidation(model, k=10)
cv.validate(X, y)
# And print the result:
cv.print_results()
| bsd-3-clause |
pkruskal/scikit-learn | sklearn/covariance/__init__.py | 389 | 1157 | """
The :mod:`sklearn.covariance` module includes methods and algorithms to
robustly estimate the covariance of features given a set of points. The
precision matrix defined as the inverse of the covariance is also estimated.
Covariance estimation is closely related to the theory of Gaussian Graphical
Models.
"""
from .empirical_covariance_ import empirical_covariance, EmpiricalCovariance, \
log_likelihood
from .shrunk_covariance_ import shrunk_covariance, ShrunkCovariance, \
ledoit_wolf, ledoit_wolf_shrinkage, \
LedoitWolf, oas, OAS
from .robust_covariance import fast_mcd, MinCovDet
from .graph_lasso_ import graph_lasso, GraphLasso, GraphLassoCV
from .outlier_detection import EllipticEnvelope
__all__ = ['EllipticEnvelope',
'EmpiricalCovariance',
'GraphLasso',
'GraphLassoCV',
'LedoitWolf',
'MinCovDet',
'OAS',
'ShrunkCovariance',
'empirical_covariance',
'fast_mcd',
'graph_lasso',
'ledoit_wolf',
'ledoit_wolf_shrinkage',
'log_likelihood',
'oas',
'shrunk_covariance']
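# Editorial sketch (not part of the original module): minimal intended usage of
# two of the estimators re-exported above. Defined only; never executed on
# import, and the sample data is an illustrative assumption.
def _example_usage():
    import numpy as np
    X = np.random.RandomState(0).randn(100, 5)
    emp = EmpiricalCovariance().fit(X)
    lw = LedoitWolf().fit(X)
    return emp.covariance_, lw.covariance_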
| bsd-3-clause |
legacysurvey/legacypipe | py/legacyanalysis/compare-to-ps1.py | 2 | 30435 | from __future__ import print_function
import matplotlib
matplotlib.use('Agg')
import pylab as plt
import numpy as np
import os
import sys
from astrometry.util.fits import fits_table
from astrometry.libkd.spherematch import match_radec
from astrometry.util.plotutils import PlotSequence
from legacyanalysis.ps1cat import ps1cat, ps1_to_decam
from legacypipe.survey import *
'''
pixsc = 0.262
apr = [1.0, 2.0, 3.5] / pixsc
#-> aperture photometry radius in pixels
decstat -- aper, img, ..., apr
-> allmags
mags = reform(allmags[2,ii])
Skyrad_pix -- default 7 to 10 pixel radius in pixels
skyrad_pix = skyrad/pixsc ; sky radii in pixels
image.py -- SE called with PIXEL_SCALE 0 -> determined by SE from header
# corresponding to diameters of [1.5,3,5,7,9,11,13,15] arcsec
# assuming 0.262 arcsec pixel scale
PHOT_APERTURES 5.7251911,11.450382,19.083969,26.717558,34.351147,41.984734,49.618320,57.251911
-> photutils aperture photometry on PsfEx image -> 1.0 at radius ~ 13;
total ~ 1.05
One of the largest differences:
z band
Photometric diff 0.0249176025391 PSF size 1.07837 expnum 292604
-> Notice that the difference is largest for *small* PSFs.
Could this be brighter-fatter?
-> Check out the region Aaron pointed to with 0.025 errors
-> Is it possible that this is coming from saturation in the zeropoints
computation (decstat)?!
-> Sky estimation?
-> Is PsfEx using any flags (SATUR) to cut candidates?
-> Look at forced phot residuals? Take model & data slices of a whole
pile of stars in expnum 292604 N4
'''
def star_profiles(ps):
# Run an example CCD, 292604-N4, with fairly large difference vs PS1.
# python -c "from astrometry.util.fits import *; T = merge_tables([fits_table('/project/projectdirs/desiproc/dr3/tractor/244/tractor-244%s.fits' % b) for b in ['2p065','4p065', '7p065']]); T.writeto('tst-cat.fits')"
# python legacypipe/forced_photom_decam.py --save-data tst-data.fits --save-model tst-model.fits 292604 N4 tst-cat.fits tst-phot.fits
# -> tst-{model,data,phot}.fits
datafn = 'tst-data.fits'
modfn = 'tst-model.fits'
photfn = 'tst-phot.fits'
catfn = 'tst-cat.fits'
img = fitsio.read(datafn)
mod = fitsio.read(modfn)
phot = fits_table(photfn)
cat = fits_table(catfn)
print(len(phot), 'forced-photometry results')
margin = 25
phot.cut((phot.x > 0+margin) * (phot.x < 2046-margin) *
(phot.y > 0+margin) * (phot.y < 4096-margin))
print(len(phot), 'in bounds')
cmap = dict([((b,o),i) for i,(b,o) in enumerate(zip(cat.brickname, cat.objid))])
I = np.array([cmap.get((b,o), -1) for b,o in zip(phot.brickname, phot.objid)])
print(np.sum(I >= 0), 'forced-phot matched cat')
phot.type = cat.type[I]
wcs = Sip(datafn)
phot.ra,phot.dec = wcs.pixelxy2radec(phot.x+1, phot.y+1)
phot.cut(np.argsort(phot.flux))
phot.sn = phot.flux * np.sqrt(phot.flux_ivar)
phot.cut(phot.sn > 5)
print(len(phot), 'with S/N > 5')
ps1 = ps1cat(ccdwcs=wcs)
stars = ps1.get_stars()
print(len(stars), 'PS1 sources')
# Now cut to just *stars* with good colors
stars.gicolor = stars.median[:,0] - stars.median[:,2]
keep = (stars.gicolor > 0.4) * (stars.gicolor < 2.7)
stars.cut(keep)
print(len(stars), 'PS1 stars with good colors')
stars.cut(np.minimum(stars.stdev[:,1], stars.stdev[:,2]) < 0.05)
print(len(stars), 'PS1 stars with min stdev(r,i) < 0.05')
I,J,d = match_radec(phot.ra, phot.dec, stars.ra, stars.dec, 1./3600.)
print(len(I), 'matches')
plt.clf()
ha=dict(histtype='step', bins=20, range=(0,100), normed=True)
plt.hist(phot.flux, color='b', **ha)
plt.hist(phot.flux[I], color='r', **ha)
ps.savefig()
plt.clf()
plt.hist(phot.flux * np.sqrt(phot.flux_ivar), bins=100,
range=(-10, 50))
plt.xlabel('Flux S/N')
ps.savefig()
K = np.argsort(phot.flux[I])
I = I[K]
J = J[K]
ix = np.round(phot.x).astype(int)
iy = np.round(phot.y).astype(int)
sz = 10
P = np.flatnonzero(phot.type == 'PSF ')
print(len(P), 'PSFs')
imed = len(P)/2
i1 = int(len(P) * 0.75)
i2 = int(len(P) * 0.25)
N = 401
allmods = []
allimgs = []
for II,tt in [#(I[:len(I)/2], 'faint matches to PS1'),
#(I[len(I)/2:], 'bright matches to PS1'),
#(P[i2: i2+N], '25th pct PSFs'),
#(P[imed: imed+N], 'median PSFs'),
#(P[i1: i1+N], '75th pct PSFs'),
#(P[-25:], 'brightest PSFs'),
(P[i2:imed], '2nd quartile of PSFs'),
(P[imed:i1], '3rd quartile of PSFs'),
#(P[:len(P)/2], 'faint half of PSFs'),
#(P[len(P)/2:], 'bright half of PSFs'),
]:
imgs = []
mods = []
shimgs = []
shmods = []
imgsum = modsum = 0
#plt.clf()
for i in II:
from astrometry.util.util import lanczos_shift_image
dy = phot.y[i] - iy[i]
dx = phot.x[i] - ix[i]
sub = img[iy[i]-sz : iy[i]+sz+1, ix[i]-sz : ix[i]+sz+1]
shimg = lanczos_shift_image(sub, -dx, -dy)
sub = mod[iy[i]-sz : iy[i]+sz+1, ix[i]-sz : ix[i]+sz+1]
shmod = lanczos_shift_image(sub, -dx, -dy)
iyslice = img[iy[i], ix[i]-sz : ix[i]+sz+1]
myslice = mod[iy[i], ix[i]-sz : ix[i]+sz+1]
ixslice = img[iy[i]-sz : iy[i]+sz+1, ix[i]]
mxslice = mod[iy[i]-sz : iy[i]+sz+1, ix[i]]
mx = iyslice.max()
# plt.plot(iyslice/mx, 'b-', alpha=0.1)
# plt.plot(myslice/mx, 'r-', alpha=0.1)
# plt.plot(ixslice/mx, 'b-', alpha=0.1)
# plt.plot(mxslice/mx, 'r-', alpha=0.1)
siyslice = shimg[sz, :]
sixslice = shimg[:, sz]
smyslice = shmod[sz, :]
smxslice = shmod[:, sz]
shimgs.append(siyslice/mx)
shimgs.append(sixslice/mx)
shmods.append(smyslice/mx)
shmods.append(smxslice/mx)
imgs.append(iyslice/mx)
imgs.append(ixslice/mx)
mods.append(myslice/mx)
mods.append(mxslice/mx)
imgsum = imgsum + ixslice + iyslice
modsum = modsum + mxslice + myslice
# plt.ylim(-0.1, 1.1)
# plt.title(tt)
# ps.savefig()
mimg = np.median(np.array(imgs), axis=0)
mmod = np.median(np.array(mods), axis=0)
mshim = np.median(np.array(shimgs), axis=0)
mshmo = np.median(np.array(shmods), axis=0)
allmods.append(mshmo)
allimgs.append(mshim)
plt.clf()
# plt.plot(mimg, 'b-')
# plt.plot(mmod, 'r-')
plt.plot(mshim, 'g-')
plt.plot(mshmo, 'm-')
plt.ylim(-0.1, 1.1)
plt.title(tt + ': median; sums %.3f/%.3f' % (np.sum(mimg), np.sum(mmod)))
ps.savefig()
# plt.clf()
# mx = imgsum.max()
# plt.plot(imgsum/mx, 'b-')
# plt.plot(modsum/mx, 'r-')
# plt.ylim(-0.1, 1.1)
# plt.title(tt + ': sum')
# ps.savefig()
plt.clf()
plt.plot((mimg + 0.01) / (mmod + 0.01), 'k-')
plt.plot((imgsum/mx + 0.01) / (modsum/mx + 0.01), 'g-')
plt.plot((mshim + 0.01) / (mshmo + 0.01), 'm-')
plt.ylabel('(img + 0.01) / (mod + 0.01)')
plt.title(tt)
ps.savefig()
iq2,iq3 = allimgs
mq2,mq3 = allmods
plt.clf()
plt.plot(iq2, 'r-')
plt.plot(mq2, 'm-')
plt.plot(iq3, 'b-')
plt.plot(mq3, 'g-')
plt.title('Q2 vs Q3')
ps.savefig()
def main():
# ps = PlotSequence('pro')
# star_profiles(ps)
# sys.exit(0)
#survey_dir = '/project/projectdirs/desiproc/dr3'
#survey = LegacySurveyData(survey_dir=survey_dir)
survey = LegacySurveyData()
ralo,rahi = 240,245
declo,dechi = 5, 12
ps = PlotSequence('comp')
bands = 'grz'
ccdfn = 'ccds-forced.fits'
if not os.path.exists(ccdfn):
ccds = survey.get_annotated_ccds()
ccds.cut((ccds.ra > ralo) * (ccds.ra < rahi) *
(ccds.dec > declo) * (ccds.dec < dechi))
print(len(ccds), 'CCDs')
ccds.path = np.array([os.path.join(#'dr3',
'forced', ('%08i' % e)[:5], '%08i' % e, 'decam-%08i-%s-forced.fits' % (e, n.strip()))
for e,n in zip(ccds.expnum, ccds.ccdname)])
I, = np.nonzero([os.path.exists(fn) for fn in ccds.path])
print(len(I), 'CCDs with forced photometry')
ccds.cut(I)
#ccds = ccds[:500]
#e,I = np.unique(ccds.expnum, return_index=True)
#print(len(I), 'unique exposures')
#ccds.cut(I)
FF = read_forcedphot_ccds(ccds, survey)
FF.writeto('forced-all-matches.fits')
# - z band -- no trend w/ PS1 mag (brighter-fatter)
ccds.writeto(ccdfn)
ccdfn2 = 'ccds-forced-2.fits'
if not os.path.exists(ccdfn2):
ccds = fits_table(ccdfn)
# Split into brighter/fainter halves
FF = fits_table('forced-all-matches.fits')
print(len(FF), 'forced measurements')
FF.cut(FF.masked == False)
print(len(FF), 'forced measurements not masked')
ccds.brightest_mdiff = np.zeros(len(ccds))
ccds.brightest_mscatter = np.zeros(len(ccds))
ccds.bright_mdiff = np.zeros(len(ccds))
ccds.bright_mscatter = np.zeros(len(ccds))
ccds.faint_mdiff = np.zeros(len(ccds))
ccds.faint_mscatter = np.zeros(len(ccds))
for iccd in range(len(ccds)):
I = np.flatnonzero(FF.iforced == iccd)
if len(I) == 0:
continue
if len(I) < 10:
continue
F = FF[I]
b = np.percentile(F.psmag, 10)
m = np.median(F.psmag)
print(len(F), 'matches for CCD', iccd, 'median mag', m, '10th pct', b)
J = np.flatnonzero(F.psmag < b)
diff = F.mag[J] - F.psmag[J]
ccds.brightest_mdiff[iccd] = np.median(diff)
ccds.brightest_mscatter[iccd] = (np.percentile(diff, 84) -
np.percentile(diff, 16))/2.
J = np.flatnonzero(F.psmag < m)
diff = F.mag[J] - F.psmag[J]
ccds.bright_mdiff[iccd] = np.median(diff)
ccds.bright_mscatter[iccd] = (np.percentile(diff, 84) -
np.percentile(diff, 16))/2.
J = np.flatnonzero(F.psmag > m)
diff = F.mag[J] - F.psmag[J]
ccds.faint_mdiff[iccd] = np.median(diff)
ccds.faint_mscatter[iccd] = (np.percentile(diff, 84) -
np.percentile(diff, 16))/2.
ccds.writeto(ccdfn2)
ccds = fits_table(ccdfn2)
plt.clf()
plt.hist(ccds.nforced, bins=100)
plt.title('nforced')
ps.savefig()
plt.clf()
plt.hist(ccds.nmatched, bins=100)
plt.title('nmatched')
ps.savefig()
#ccds.cut(ccds.nmatched >= 150)
ccds.cut(ccds.nmatched >= 50)
print('Cut to', len(ccds), 'with >50 matched')
ccds.cut(ccds.photometric)
print('Cut to', len(ccds), 'photometric')
neff = 1. / ccds.psfnorm_mean**2
# Narcsec is in arcsec**2
narcsec = neff * ccds.pixscale_mean**2
# to arcsec
narcsec = np.sqrt(narcsec)
# Correction factor to get back to equivalent of Gaussian sigma
narcsec /= (2. * np.sqrt(np.pi))
# Conversion factor to FWHM (2.35)
narcsec *= 2. * np.sqrt(2. * np.log(2.))
ccds.psfsize = narcsec
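    # Editorial sanity check (not in the original script): for a unit-flux
    # Gaussian PSF of width sigma pixels, psfnorm = 1 / (2 * sqrt(pi) * sigma),
    # so neff = 4 * pi * sigma**2 and the chain above reduces to
    # psfsize = 2.355 * sigma * pixscale, i.e. the Gaussian FWHM in arcsec.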
for band in bands:
I = np.flatnonzero((ccds.filter == band)
* (ccds.photometric) * (ccds.blacklist_ok))
mlo,mhi = -0.01, 0.05
plt.clf()
plt.plot(ccds.ccdzpt[I],
ccds.exptime[I], 'k.', alpha=0.1)
J = np.flatnonzero((ccds.filter == band) * (ccds.photometric == False))
plt.plot(ccds.ccdzpt[J],
ccds.exptime[J], 'r.', alpha=0.1)
plt.xlabel('Zeropoint (mag)')
plt.ylabel('exptime')
plt.title('DR3: EDR region, Forced phot: %s band' % band)
ps.savefig()
plt.clf()
plt.plot(ccds.ccdzpt[I],
np.clip(ccds.mdiff[I], mlo,mhi), 'k.', alpha=0.1)
plt.xlabel('Zeropoint (mag)')
plt.ylabel('DECaLS PSF - PS1 (mag)')
plt.axhline(0, color='k', alpha=0.2)
#plt.axis([0, mxsee, mlo,mhi])
plt.title('DR3: EDR region, Forced phot: %s band' % band)
ps.savefig()
plt.clf()
plt.plot(ccds.ccdzpt[I], ccds.psfsize[I], 'k.', alpha=0.1)
plt.xlabel('Zeropoint (mag)')
plt.ylabel('PSF size (arcsec)')
plt.title('DR3: EDR region, Forced phot: %s band' % band)
ps.savefig()
# plt.clf()
# plt.plot(ccds.avsky[I],
# np.clip(ccds.mdiff[I], mlo,mhi), 'k.', alpha=0.1)
# plt.xlabel('avsky')
# plt.ylabel('DECaLS PSF - PS1 (mag)')
# plt.axhline(0, color='k', alpha=0.2)
# plt.title('DR3: EDR region, Forced phot: %s band' % band)
# ps.savefig()
#
# plt.clf()
# plt.plot(ccds.meansky[I],
# np.clip(ccds.mdiff[I], mlo,mhi), 'k.', alpha=0.1)
# plt.xlabel('meansky')
# plt.ylabel('DECaLS PSF - PS1 (mag)')
# plt.axhline(0, color='k', alpha=0.2)
# plt.title('DR3: EDR region, Forced phot: %s band' % band)
# ps.savefig()
# plt.clf()
# plt.plot(ccds.avsky[I],
# np.clip(ccds.mdiff[I], mlo,mhi), 'k.', alpha=0.1)
# plt.xlabel('avsky')
# plt.ylabel('DECaLS PSF - PS1 (mag)')
# plt.axhline(0, color='k', alpha=0.2)
# plt.title('DR3: EDR region, Forced phot: %s band' % band)
# ps.savefig()
#
# plt.clf()
# plt.plot(ccds.meansky[I],
# np.clip(ccds.mdiff[I], mlo,mhi), 'k.', alpha=0.1)
# plt.xlabel('meansky')
# plt.ylabel('DECaLS PSF - PS1 (mag)')
# plt.axhline(0, color='k', alpha=0.2)
# plt.title('DR3: EDR region, Forced phot: %s band' % band)
# ps.savefig()
plt.clf()
lo,hi = (-0.02, 0.05)
ha = dict(bins=50, histtype='step', range=(lo,hi))
n,b,p1 = plt.hist(ccds.brightest_mdiff[I], color='r', **ha)
n,b,p2 = plt.hist(ccds.bright_mdiff[I], color='g', **ha)
n,b,p3 = plt.hist(ccds.faint_mdiff[I], color='b', **ha)
plt.legend((p1[0],p2[0],p3[0]), ('Brightest 10%', 'Brightest 50%',
'Faintest 50%'))
plt.xlabel('DECaLS PSF - PS1 (mag)')
plt.ylabel('Number of CCDs')
plt.title('DR3: EDR region, Forced phot: %s band' % band)
plt.xlim(lo,hi)
ps.savefig()
for band in bands:
I = np.flatnonzero(ccds.filter == band)
mxsee = 4.
mlo,mhi = -0.01, 0.05
plt.clf()
plt.plot(np.clip(ccds.psfsize[I], 0, mxsee),
np.clip(ccds.mdiff[I], mlo,mhi), 'k.', alpha=0.1)
# for p in [1,2,3]:
# J = np.flatnonzero(ccds.tilepass[I] == p)
# if len(J):
# plt.plot(np.clip(ccds.psfsize[I[J]], 0, mxsee),
# np.clip(ccds.mdiff[I[J]], mlo,mhi), '.', color='rgb'[p-1], alpha=0.2)
#plt.plot(ccds.seeing[I], ccds.mdiff[I], 'b.')
plt.xlabel('PSF size (arcsec)')
plt.ylabel('DECaLS PSF - PS1 (mag)')
plt.axhline(0, color='k', alpha=0.2)
plt.axis([0, mxsee, mlo,mhi])
plt.title('DR3: EDR region, Forced phot: %s band' % band)
ps.savefig()
# Group by exposure
for band in bands:
I = np.flatnonzero((ccds.filter == band)
* (ccds.photometric) * (ccds.blacklist_ok))
E,J = np.unique(ccds.expnum[I], return_index=True)
print(len(E), 'unique exposures in', band)
exps = ccds[I[J]]
print(len(exps), 'unique exposures in', band)
assert(len(np.unique(exps.expnum)) == len(exps))
exps.ddiff = np.zeros(len(exps))
exps.dsize = np.zeros(len(exps))
exps.nccds = np.zeros(len(exps), int)
exps.brightest_ddiff = np.zeros(len(exps))
exps.bright_ddiff = np.zeros(len(exps))
exps.faint_ddiff = np.zeros(len(exps))
for iexp,exp in enumerate(exps):
J = np.flatnonzero(ccds.expnum[I] == exp.expnum)
J = I[J]
print(len(J), 'CCDs in exposure', exp.expnum)
exps.brightest_mdiff[iexp] = np.median(ccds.brightest_mdiff[J])
exps.bright_mdiff[iexp] = np.median(ccds.bright_mdiff[J])
exps.faint_mdiff[iexp] = np.median(ccds.faint_mdiff[J])
exps.brightest_ddiff[iexp] = (
np.percentile(ccds.brightest_mdiff[J], 84) -
np.percentile(ccds.brightest_mdiff[J], 16))/2.
exps.bright_ddiff[iexp] = (
np.percentile(ccds.bright_mdiff[J], 84) -
np.percentile(ccds.bright_mdiff[J], 16))/2.
exps.faint_ddiff[iexp] = (
np.percentile(ccds.faint_mdiff[J], 84) -
np.percentile(ccds.faint_mdiff[J], 16))/2.
exps.mdiff[iexp] = np.median(ccds.mdiff[J])
exps.ddiff[iexp] = (np.percentile(ccds.mdiff[J], 84) - np.percentile(ccds.mdiff[J], 16))/2.
exps.psfsize[iexp] = np.median(ccds.psfsize[J])
exps.dsize[iexp] = (np.percentile(ccds.psfsize[J], 84) - np.percentile(ccds.psfsize[J], 16))/2.
exps.nccds[iexp] = len(J)
mxsee = 4.
mlo,mhi = -0.01, 0.05
exps.cut(exps.nccds >= 10)
plt.clf()
plt.errorbar(np.clip(exps.psfsize, 0, mxsee),
np.clip(exps.mdiff, mlo,mhi), yerr=exps.ddiff,
#xerr=exps.dsize,
fmt='.', color='k')
# plt.errorbar(np.clip(exps.psfsize, 0, mxsee),
# np.clip(exps.brightest_mdiff, mlo,mhi),
# yerr=exps.brightest_ddiff, fmt='r.')
# plt.errorbar(np.clip(exps.psfsize, 0, mxsee),
# np.clip(exps.bright_mdiff, mlo,mhi),
# yerr=exps.bright_ddiff, fmt='g.')
# plt.errorbar(np.clip(exps.psfsize, 0, mxsee),
# np.clip(exps.faint_mdiff, mlo,mhi),
# yerr=exps.faint_ddiff, fmt='b.')
# plt.plot(np.clip(exps.psfsize, 0, mxsee),
# np.clip(exps.brightest_mdiff, mlo,mhi), 'r.')
# plt.plot(np.clip(exps.psfsize, 0, mxsee),
# np.clip(exps.bright_mdiff, mlo,mhi), 'g.')
# plt.plot(np.clip(exps.psfsize, 0, mxsee),
# np.clip(exps.faint_mdiff, mlo,mhi), 'b.')
#plt.plot(ccds.seeing[I], ccds.mdiff[I], 'b.')
plt.xlabel('PSF size (arcsec)')
plt.ylabel('DECaLS PSF - PS1 (mag)')
plt.axhline(0, color='k', alpha=0.2)
plt.axis([0, mxsee, mlo,mhi])
plt.title('DR3: EDR region, Forced phot: %s band' % band)
ps.savefig()
plt.clf()
p1 = plt.plot(np.clip(exps.psfsize, 0, mxsee),
np.clip(exps.brightest_mdiff, mlo,mhi), 'r.', alpha=0.5)
p2 = plt.plot(np.clip(exps.psfsize, 0, mxsee),
np.clip(exps.bright_mdiff, mlo,mhi), 'g.', alpha=0.5)
p3 = plt.plot(np.clip(exps.psfsize, 0, mxsee),
np.clip(exps.faint_mdiff, mlo,mhi), 'b.', alpha=0.5)
plt.legend((p1[0],p2[0],p3[0]), ('Brightest 10%', 'Brightest 50%',
'Faintest 50%'))
plt.xlabel('PSF size (arcsec)')
plt.ylabel('DECaLS PSF - PS1 (mag)')
plt.axhline(0, color='k', alpha=0.2)
plt.axis([0, mxsee, mlo,mhi])
plt.title('DR3: EDR region, Forced phot: %s band' % band)
ps.savefig()
J = np.argsort(-exps.mdiff)
for j in J:
print(' Photometric diff', exps.mdiff[j], 'PSF size', exps.psfsize[j], 'expnum', exps.expnum[j])
sys.exit(0)
def read_forcedphot_ccds(ccds, survey):
ccds.mdiff = np.zeros(len(ccds))
ccds.mscatter = np.zeros(len(ccds))
Nap = 8
ccds.apdiff = np.zeros((len(ccds), Nap))
ccds.apscatter = np.zeros((len(ccds), Nap))
ccds.nforced = np.zeros(len(ccds), np.int16)
ccds.nunmasked = np.zeros(len(ccds), np.int16)
ccds.nmatched = np.zeros(len(ccds), np.int16)
ccds.nps1 = np.zeros(len(ccds), np.int16)
brickcache = {}
FF = []
for iccd,ccd in enumerate(ccds):
print('CCD', iccd, 'of', len(ccds))
F = fits_table(ccd.path)
print(len(F), 'sources in', ccd.path)
ccds.nforced[iccd] = len(F)
# arr, have to match with brick sources to get RA,Dec.
F.ra = np.zeros(len(F))
F.dec = np.zeros(len(F))
F.masked = np.zeros(len(F), bool)
maglo,maghi = 14.,21.
maxdmag = 1.
F.mag = -2.5 * (np.log10(F.flux) - 9)
F.cut((F.flux > 0) * (F.mag > maglo-maxdmag) * (F.mag < maghi+maxdmag))
print(len(F), 'sources between', (maglo-maxdmag), 'and', (maghi+maxdmag), 'mag')
im = survey.get_image_object(ccd)
print('Reading DQ image for', im)
dq = im.read_dq()
H,W = dq.shape
ix = np.clip(np.round(F.x), 0, W-1).astype(int)
iy = np.clip(np.round(F.y), 0, H-1).astype(int)
F.mask = dq[iy,ix]
print(np.sum(F.mask != 0), 'sources are masked')
for brickname in np.unique(F.brickname):
if not brickname in brickcache:
brickcache[brickname] = fits_table(survey.find_file('tractor', brick=brickname))
T = brickcache[brickname]
idtoindex = np.zeros(T.objid.max()+1, int) - 1
idtoindex[T.objid] = np.arange(len(T))
I = np.flatnonzero(F.brickname == brickname)
J = idtoindex[F.objid[I]]
assert(np.all(J >= 0))
F.ra [I] = T.ra [J]
F.dec[I] = T.dec[J]
F.masked[I] = (T.decam_anymask[J,:].max(axis=1) > 0)
#F.cut(F.masked == False)
#print(len(F), 'not masked')
print(np.sum(F.masked), 'masked in ANYMASK')
ccds.nunmasked[iccd] = len(F)
wcs = Tan(*[float(x) for x in [ccd.crval1, ccd.crval2, ccd.crpix1, ccd.crpix2,
ccd.cd1_1, ccd.cd1_2, ccd.cd2_1, ccd.cd2_2,
ccd.width, ccd.height]])
ps1 = ps1cat(ccdwcs=wcs)
stars = ps1.get_stars()
print(len(stars), 'PS1 sources')
ccds.nps1[iccd] = len(stars)
# Now cut to just *stars* with good colors
stars.gicolor = stars.median[:,0] - stars.median[:,2]
keep = (stars.gicolor > 0.4) * (stars.gicolor < 2.7)
stars.cut(keep)
print(len(stars), 'PS1 stars with good colors')
stars.cut(np.minimum(stars.stdev[:,1], stars.stdev[:,2]) < 0.05)
print(len(stars), 'PS1 stars with min stdev(r,i) < 0.05')
I,J,d = match_radec(F.ra, F.dec, stars.ra, stars.dec, 1./3600.)
print(len(I), 'matches')
band = ccd.filter
colorterm = ps1_to_decam(stars.median[J], band)
F.cut(I)
F.psmag = stars.median[J, ps1.ps1band[band]] + colorterm
K = np.flatnonzero((F.psmag > maglo) * (F.psmag < maghi))
print(len(K), 'with mag', maglo, 'to', maghi)
F.cut(K)
K = np.flatnonzero(np.abs(F.mag - F.psmag) < maxdmag)
print(len(K), 'with good mag matches (<', maxdmag, 'mag difference)')
ccds.nmatched[iccd] = len(K)
if len(K) == 0:
continue
F.cut(K)
ccds.mdiff[iccd] = np.median(F.mag - F.psmag)
ccds.mscatter[iccd] = (np.percentile(F.mag - F.psmag, 84) -
np.percentile(F.mag - F.psmag, 16))/2.
for i in range(Nap):
apmag = -2.5 * (np.log10(F.apflux[:, i]) - 9)
ccds.apdiff[iccd,i] = np.median(apmag - F.psmag)
ccds.apscatter[iccd,i] = (np.percentile(apmag - F.psmag, 84) -
np.percentile(apmag - F.psmag, 16))/2.
#F.about()
for c in ['apflux_ivar', 'brickid', 'flux_ivar',
'mjd', 'objid', 'fracflux', 'rchi2', 'x','y']:
F.delete_column(c)
F.expnum = np.zeros(len(F), np.int32) + ccd.expnum
F.ccdname = np.array([ccd.ccdname] * len(F))
F.iforced = np.zeros(len(F), np.int32) + iccd
FF.append(F)
FF = merge_tables(FF)
return FF
bricks = survey.get_bricks_readonly()
bricks = bricks[(bricks.ra > ralo) * (bricks.ra < rahi) *
(bricks.dec > declo) * (bricks.dec < dechi)]
print(len(bricks), 'bricks')
I, = np.nonzero([os.path.exists(survey.find_file('tractor', brick=b.brickname))
for b in bricks])
print(len(I), 'bricks with catalogs')
bricks.cut(I)
for band in bands:
bricks.set('diff_%s' % band, np.zeros(len(bricks), np.float32))
bricks.set('psfsize_%s' % band, np.zeros(len(bricks), np.float32))
diffs = dict([(b,[]) for b in bands])
for ibrick,b in enumerate(bricks):
fn = survey.find_file('tractor', brick=b.brickname)
T = fits_table(fn)
print(len(T), 'sources in', b.brickname)
brickwcs = wcs_for_brick(b)
ps1 = ps1cat(ccdwcs=brickwcs)
stars = ps1.get_stars()
print(len(stars), 'PS1 sources')
# Now cut to just *stars* with good colors
stars.gicolor = stars.median[:,0] - stars.median[:,2]
keep = (stars.gicolor > 0.4) * (stars.gicolor < 2.7)
stars.cut(keep)
print(len(stars), 'PS1 stars with good colors')
I,J,d = match_radec(T.ra, T.dec, stars.ra, stars.dec, 1./3600.)
print(len(I), 'matches')
for band in bands:
bricks.get('psfsize_%s' % band)[ibrick] = np.median(
T.decam_psfsize[:, survey.index_of_band(band)])
colorterm = ps1_to_decam(stars.median[J], band)
psmag = stars.median[J, ps1.ps1band[band]]
psmag += colorterm
decflux = T.decam_flux[I, survey.index_of_band(band)]
decmag = -2.5 * (np.log10(decflux) - 9)
#K = np.flatnonzero((psmag > 14) * (psmag < 24))
#print(len(K), 'with mag 14 to 24')
K = np.flatnonzero((psmag > 14) * (psmag < 21))
print(len(K), 'with mag 14 to 21')
decmag = decmag[K]
psmag = psmag [K]
K = np.flatnonzero(np.abs(decmag - psmag) < 1)
print(len(K), 'with good mag matches (< 1 mag difference)')
decmag = decmag[K]
psmag = psmag [K]
if False and ibrick == 0:
plt.clf()
#plt.plot(psmag, decmag, 'b.')
plt.plot(psmag, decmag - psmag, 'b.')
plt.xlabel('PS1 mag')
plt.xlabel('DECam - PS1 mag')
plt.title('PS1 matches for %s band, brick %s' % (band, b.brickname))
ps.savefig()
mdiff = np.median(decmag - psmag)
diffs[band].append(mdiff)
print('Median difference:', mdiff)
bricks.get('diff_%s' % band)[ibrick] = mdiff
for band in bands:
d = diffs[band]
plt.clf()
plt.hist(d, bins=20, range=(-0.02, 0.02), histtype='step')
plt.xlabel('Median mag difference per brick')
plt.title('DR3 EDR PS1 vs DECaLS: %s band' % band)
ps.savefig()
print('Median differences in', band, 'band:', np.median(d))
if False:
plt.clf()
plt.hist(diffs['g'], bins=20, range=(-0.02, 0.02), histtype='step', color='g')
plt.hist(diffs['r'], bins=20, range=(-0.02, 0.02), histtype='step', color='r')
plt.hist(diffs['z'], bins=20, range=(-0.02, 0.02), histtype='step', color='m')
plt.xlabel('Median mag difference per brick')
plt.title('DR3 EDR PS1 vs DECaLS')
ps.savefig()
rr,dd = np.meshgrid(np.linspace(ralo,rahi, 400), np.linspace(declo,dechi, 400))
I,J,d = match_radec(rr.ravel(), dd.ravel(), bricks.ra, bricks.dec, 0.18, nearest=True)
print(len(I), 'matches')
for band in bands:
plt.clf()
dmag = np.zeros_like(rr) - 1.
dmag.ravel()[I] = bricks.get('diff_%s' % band)[J]
plt.imshow(dmag, interpolation='nearest', origin='lower',
vmin=-0.01, vmax=0.01, cmap='hot',
extent=(ralo,rahi,declo,dechi))
plt.colorbar()
plt.title('DR3 EDR PS1 vs DECaLS: %s band' % band)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
plt.axis([ralo,rahi,declo,dechi])
ps.savefig()
plt.clf()
# reuse 'dmag' map...
dmag = np.zeros_like(rr)
dmag.ravel()[I] = bricks.get('psfsize_%s' % band)[J]
plt.imshow(dmag, interpolation='nearest', origin='lower',
cmap='hot', extent=(ralo,rahi,declo,dechi))
plt.colorbar()
plt.title('DR3 EDR: DECaLS PSF size: %s band' % band)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
plt.axis([ralo,rahi,declo,dechi])
ps.savefig()
if False:
for band in bands:
plt.clf()
plt.scatter(bricks.ra, bricks.dec, c=bricks.get('diff_%s' % band), vmin=-0.01, vmax=0.01,
edgecolors='face', s=200)
plt.colorbar()
plt.title('DR3 EDR PS1 vs DECaLS: %s band' % band)
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)')
plt.axis('scaled')
plt.axis([ralo,rahi,declo,dechi])
ps.savefig()
plt.clf()
plt.plot(bricks.psfsize_g, bricks.diff_g, 'g.')
plt.plot(bricks.psfsize_r, bricks.diff_r, 'r.')
plt.plot(bricks.psfsize_z, bricks.diff_z, 'm.')
plt.xlabel('PSF size (arcsec)')
plt.ylabel('DECaLS PSF - PS1 (mag)')
plt.title('DR3 EDR')
ps.savefig()
if __name__ == '__main__':
main()
| bsd-3-clause |
dallascard/guac | core/evaluation/summarize_results.py | 1 | 2115 | import os
from optparse import OptionParser
import glob
import numpy as np
import pandas as pd
from core.util import defines
from core.util import file_handling as fh
def main():
usage = "%prog exp_dir_test_fold_dir"
parser = OptionParser(usage=usage)
parser.add_option('-t', dest='test_fold', default=0,
help='Test fold; default=%default')
(options, args) = parser.parse_args()
test_fold = options.test_fold
exp_dir = args[0]
results = pd.DataFrame(columns=('masked', 'test', 'valid', 'dir'))
run_dirs = glob.glob(os.path.join(exp_dir, 'bayes*reuse*'))
for i, dir in enumerate(run_dirs):
run_num = int(fh.get_basename_wo_ext(dir).split('_')[-1])
if run_num <= 40 and '1_' not in fh.get_basename_wo_ext(dir):
results_dir = os.path.join(dir, 'results')
test_file = fh.make_filename(results_dir, 'test_macro_f1', 'csv')
valid_file = fh.make_filename(results_dir, 'valid_cv_macro_f1', 'csv')
masked_valid_file = fh.make_filename(results_dir, 'masked_valid_cv_macro_f1', 'csv')
try:
test = pd.read_csv(test_file, header=False, index_col=0)
valid = pd.read_csv(valid_file, header=False, index_col=0)
masked_valid = pd.read_csv(masked_valid_file, header=False, index_col=0)
#results.loc[run_num, 'iteration'] = run_num
results.loc[i, 'masked'] = masked_valid['overall'].mean()
results.loc[i, 'test'] = test['overall'].mean()
results.loc[i, 'valid'] = valid['overall'].mean()
results.loc[i, 'dir'] = fh.get_basename_wo_ext(dir)
except:
continue
results.to_csv(fh.make_filename(exp_dir, 'summary', 'csv'), columns=results.columns)
sorted = results.sort('valid')
print sorted
print "best by masked"
sorted = results.sort('masked')
print sorted.values[-1, :]
print "best by valid"
sorted = results.sort('valid')
print sorted.values[-1, :]
if __name__ == '__main__':
main()
| apache-2.0 |
numenta/htmresearch | projects/union_path_integration/plot_comparison_to_ideal.py | 3 | 4358 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2018, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""Plot convergence chart that compares different algorithms."""
import argparse
from collections import defaultdict
import json
import os
import matplotlib.pyplot as plt
import numpy as np
CWD = os.path.dirname(os.path.realpath(__file__))
CHART_DIR = os.path.join(CWD, "charts")
def getCumulativeAccuracy(convergenceFrequencies):
tot = float(sum(convergenceFrequencies.values()))
results = []
cum = 0.0
for step in xrange(1, 41):
cum += convergenceFrequencies.get(str(step), 0)
results.append(cum / tot)
return results
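# Worked example (illustrative values, not experiment output): for
# convergenceFrequencies = {"1": 2, "2": 1, "4": 1}, tot is 4.0 and the returned
# cumulative accuracies for steps 1, 2, 3, 4, 5, ... are 0.5, 0.75, 0.75, 1.0, 1.0, ...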
def createChart(inFilename, outFilename, locationModuleWidths, legendPosition):
numSteps = 12
resultsByParams = defaultdict(list)
with open(inFilename, "r") as f:
experiments = json.load(f)
for exp in experiments:
locationModuleWidth = exp[0]["locationModuleWidth"]
resultsByParams[locationModuleWidth].append(
getCumulativeAccuracy(exp[1]["convergence"]))
with open("results/ideal.json", "r") as f:
idealResults = [getCumulativeAccuracy(trial)
for trial in json.load(f)]
with open("results/bof.json", "r") as f:
bofResults = [getCumulativeAccuracy(trial)
for trial in json.load(f)]
plt.figure(figsize=(3.25, 2.5), tight_layout = {"pad": 0})
data = (
[(idealResults, "Ideal Observer", "x--", 10, 1)]
+
[(resultsByParams[locationModuleWidth],
"{}x{} Cells Per Module".format(locationModuleWidth,
locationModuleWidth),
fmt,
None,
0)
for locationModuleWidth, fmt in zip(locationModuleWidths,
["s-", "o-", "^-"])]
+
[(bofResults, "Bag of Features", "d--", None, -1)])
percentiles = [5, 50, 95]
for resultsByTrial, label, fmt, markersize, zorder in data:
x = []
y = []
errBelow = []
errAbove = []
resultsByStep = zip(*resultsByTrial)
for step, results in zip(xrange(numSteps), resultsByStep):
x.append(step + 1)
p1, p2, p3 = np.percentile(results, percentiles)
y.append(p2)
errBelow.append(p2 - p1)
errAbove.append(p3 - p2)
plt.errorbar(x, y, yerr=[errBelow, errAbove], fmt=fmt, label=label,
capsize=2, markersize=markersize, zorder=zorder)
# Formatting
plt.xlabel("Number of Sensations")
plt.ylabel("Cumulative Accuracy")
plt.xticks([(i+1) for i in xrange(numSteps)])
# Remove the errorbars from the legend.
handles, labels = plt.gca().get_legend_handles_labels()
handles = [h[0] for h in handles]
plt.legend(handles, labels, loc="center right", bbox_to_anchor=legendPosition)
outFilePath = os.path.join(CHART_DIR, outFilename)
print "Saving", outFilePath
plt.savefig(outFilePath)
plt.clf()
if __name__ == "__main__":
plt.rc("font",**{"family": "sans-serif",
"sans-serif": ["Arial"],
"size": 8})
parser = argparse.ArgumentParser()
parser.add_argument("--inFile", type=str, required=True)
parser.add_argument("--outFile", type=str, required=True)
parser.add_argument("--locationModuleWidth", type=int, nargs='+',
default=[17, 20, 40])
parser.add_argument("--legendPosition", type=float, nargs=2, default=None)
args = parser.parse_args()
createChart(args.inFile, args.outFile, args.locationModuleWidth, args.legendPosition)
| agpl-3.0 |
benjaminwilson/word2vec-norm-experiments | build_images_coocc_noise.py | 1 | 3502 | """
Given the vectors and word counts resulting from the experiments, build
graphics for the cooccurrence noise variation experiment section of the
article.
"""
import numpy as np
import pandas as pd
import matplotlib; matplotlib.use('Agg') # must be set before importing pyplot
import matplotlib.pyplot as plt
import random
import sys
from parameters import *
from functions import *
random.seed(1) # fix seed so that we can refer to the randomly chosen words in the article body
vectors_syn0_filename = sys.argv[1]
vectors_syn1neg_filename = sys.argv[2]
word_counts_filename = sys.argv[3]
coocc_noise_exp_words_filename = sys.argv[4]
matplotlib.rcParams.update({'font.size': 14})
matplotlib.rcParams.update({'figure.autolayout': True})
coocc_noise_experiment_words = read_words(coocc_noise_exp_words_filename)
def calculate_norms(vecs):
return np.sqrt((vecs ** 2).sum(axis=1))
vectors_syn0 = load_word2vec_binary(vectors_syn0_filename)
norms_syn0 = calculate_norms(vectors_syn0)
vectors_syn1neg = load_word2vec_binary(vectors_syn1neg_filename)
norms_syn1neg = calculate_norms(vectors_syn1neg)
vocab = list(vectors_syn0.index)
counts_modified_corpus = read_word_counts(word_counts_filename)
stats = pd.DataFrame({'occurrences': counts_modified_corpus,
'L2_norm_syn0': norms_syn0,
'L2_norm_syn1neg': norms_syn1neg}).dropna()
# WORD VECTOR LENGTH AS A FUNCTION OF NOISE PROPORTION
# reorder by the left-most value of the corresponding syn0 plot
# i.e. by word vector length of the noiseless token
words = sorted(coocc_noise_experiment_words,
key=lambda word: stats.L2_norm_syn0.loc[build_experiment_token(word, 1)],
reverse=True)
markers = [['o', 's', 'D'][i % 3] for i in range(len(words))]
fig = plt.figure(figsize=(16, 6))
colorcycle = plt.cm.gist_rainbow(np.linspace(0, 1, 5))
xlabel = 'Proportion of occurrences from noise distribution'
ax_syn0 = plt.subplot(131)
ax_syn0.set_xlabel(xlabel)
ax_syn0.set_ylabel('vector length')
ax_syn0.set_color_cycle(colorcycle)
ax_syn0.set_title('syn0', y=1.04)
ax_syn1neg = plt.subplot(132, sharex=ax_syn0)
ax_syn1neg.set_xlabel(xlabel)
ax_syn1neg.set_color_cycle(colorcycle)
ax_syn1neg.set_title('syn1neg', y=1.04)
def plot_for_word(ax, word, series, **kwargs):
outcomes = range(1, coocc_noise_experiment_max_value + 1)
x = [noise_proportion(i, coocc_noise_experiment_max_value) for i in outcomes]
y = series.loc[[build_experiment_token(word, i) for i in outcomes]]
marker = ['o', 's', 'D'][ord(word[0]) % 3]
return ax.plot(x, y, marker=marker, **kwargs)[0]
lines = []
for word in words:
lines.append(plot_for_word(ax_syn0, word, stats.L2_norm_syn0))
for word in words:
plot_for_word(ax_syn1neg, word, stats.L2_norm_syn1neg)
ax_syn0.set_ylim(0, 30)
ax_syn1neg.set_ylim(0, 15)
ax_syn0.set_xlim(0, 1)
_ = fig.legend(lines, words, bbox_to_anchor=(0.76, 0.56), loc='center', fontsize=14, frameon=False)
plt.tight_layout()
plt.savefig('outputs/cooccurrence-noise-graph.eps')
# COSINE SIMILARITY HEATMAP
idxs = []
ticks = []
for word in random.sample(coocc_noise_experiment_words, 4):
tokens = [build_experiment_token(word, i) for i in range(1, coocc_noise_experiment_max_value + 1)]
idxs += tokens
ticks += tokens[:2] + ['. '] * len(tokens[2:-1]) + tokens[-1:]
example_vecs = vectors_syn0.loc[idxs]
cosine_similarity_heatmap(example_vecs, ticks, figsize=(12, 10))
plt.savefig('outputs/cooccurrence-noise-heatmap.eps')
| apache-2.0 |
ZenDevelopmentSystems/scikit-learn | doc/tutorial/text_analytics/skeletons/exercise_01_language_train_model.py | 254 | 2005 | """Build a language detector model
The goal of this exercise is to train a linear classifier on text features
that represent sequences of up to 3 consecutive characters so as to
recognize natural languages by using the frequencies of short character
sequences as 'fingerprints'.
"""
# Author: Olivier Grisel <[email protected]>
# License: Simplified BSD
import sys
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Perceptron
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_files
from sklearn.cross_validation import train_test_split
from sklearn import metrics
# The training data folder must be passed as first argument
languages_data_folder = sys.argv[1]
dataset = load_files(languages_data_folder)
# Split the dataset in training and test set:
docs_train, docs_test, y_train, y_test = train_test_split(
dataset.data, dataset.target, test_size=0.5)
# TASK: Build a vectorizer that splits strings into sequences of 1 to 3
# characters instead of word tokens
# TASK: Build a vectorizer / classifier pipeline using the previous analyzer
# the pipeline instance should be stored in a variable named clf
# TASK: Fit the pipeline on the training set
# TASK: Predict the outcome on the testing set in a variable named y_predicted
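# One possible completion of the TASKs above (a sketch; the reference solution
# shipped with the tutorial may differ). It defines the `clf` and `y_predicted`
# names that the reporting code below expects.
vectorizer = TfidfVectorizer(analyzer='char', ngram_range=(1, 3))
clf = Pipeline([('vec', vectorizer), ('clf', Perceptron())])
clf.fit(docs_train, y_train)
y_predicted = clf.predict(docs_test)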
# Print the classification report
print(metrics.classification_report(y_test, y_predicted,
target_names=dataset.target_names))
# Plot the confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted)
print(cm)
#import pylab as pl
#pl.matshow(cm, cmap=pl.cm.jet)
#pl.show()
# Predict the result on some short new sentences:
sentences = [
u'This is a language detection test.',
u'Ceci est un test de d\xe9tection de la langue.',
u'Dies ist ein Test, um die Sprache zu erkennen.',
]
predicted = clf.predict(sentences)
for s, p in zip(sentences, predicted):
print(u'The language of "%s" is "%s"' % (s, dataset.target_names[p]))
| bsd-3-clause |
danuker/trading-with-python | lib/interactiveBrokers/histData.py | 76 | 6472 | '''
Created on May 8, 2013
Copyright: Jev Kuznetsov
License: BSD
Module for downloading historic data from IB
'''
import ib
import pandas as pd
from ib.ext.Contract import Contract
from ib.opt import ibConnection, message
import logger as logger
from pandas import DataFrame, Index
import os
import datetime as dt
import time
from time import sleep
from extra import timeFormat, dateFormat
class Downloader(object):
def __init__(self,debug=False):
self._log = logger.getLogger('DLD')
self._log.debug('Initializing data downloader. Pandas version={0}, ibpy version:{1}'.format(pd.__version__,ib.version))
self.tws = ibConnection()
self._dataHandler = _HistDataHandler(self.tws)
if debug:
self.tws.registerAll(self._debugHandler)
self.tws.unregister(self._debugHandler,message.HistoricalData)
self._log.debug('Connecting to tws')
self.tws.connect()
self._timeKeeper = TimeKeeper() # keep track of past requests
self._reqId = 1 # current request id
def _debugHandler(self,msg):
print '[debug]', msg
def requestData(self,contract,endDateTime,durationStr='1 D',barSizeSetting='30 secs',whatToShow='TRADES',useRTH=1,formatDate=1):
if isinstance(endDateTime,dt.datetime): # convert to string
endDateTime = endDateTime.strftime(timeFormat)
self._log.debug('Requesting data for %s end time %s.' % (contract.m_symbol,endDateTime))
while self._timeKeeper.nrRequests(timeSpan=600) > 59:
print 'Too many requests done. Waiting... '
time.sleep(10)
self._timeKeeper.addRequest()
self._dataHandler.reset()
self.tws.reqHistoricalData(self._reqId,contract,endDateTime,durationStr,barSizeSetting,whatToShow,useRTH,formatDate)
self._reqId+=1
#wait for data
startTime = time.time()
timeout = 3
while not self._dataHandler.dataReady and (time.time()-startTime < timeout):
sleep(2)
if not self._dataHandler.dataReady:
self._log.error('Data timeout')
print self._dataHandler.data
return self._dataHandler.data
# def getIntradayData(self,contract, dateTuple ):
# ''' get full day data on 1-s interval
# date: a tuple of (yyyy,mm,dd)
# '''
#
# openTime = dt.datetime(*dateTuple)+dt.timedelta(hours=16)
# closeTime = dt.datetime(*dateTuple)+dt.timedelta(hours=22)
#
# timeRange = pd.date_range(openTime,closeTime,freq='30min')
#
# datasets = []
#
# for t in timeRange:
# datasets.append(self.requestData(contract,t.strftime(timeFormat)))
#
# return pd.concat(datasets)
def disconnect(self):
self.tws.disconnect()
class _HistDataHandler(object):
''' handles incoming messages '''
def __init__(self,tws):
self._log = logger.getLogger('DH')
tws.register(self.msgHandler,message.HistoricalData)
self.reset()
def reset(self):
self._log.debug('Resetting data')
self.dataReady = False
self._timestamp = []
self._data = {'open':[],'high':[],'low':[],'close':[],'volume':[],'count':[],'WAP':[]}
def msgHandler(self,msg):
#print '[msg]', msg
if msg.date[:8] == 'finished':
self._log.debug('Data received')
self.dataReady = True
return
if len(msg.date) > 8:
self._timestamp.append(dt.datetime.strptime(msg.date,timeFormat))
else:
self._timestamp.append(dt.datetime.strptime(msg.date,dateFormat))
for k in self._data.keys():
self._data[k].append(getattr(msg, k))
@property
def data(self):
''' return downloaded data as a DataFrame '''
df = DataFrame(data=self._data,index=Index(self._timestamp))
return df
class TimeKeeper(object):
'''
class for keeping track of previous requests, to satisfy the IB requirements
(max 60 requests / 10 min)
each time a request is made, a timestamp is added to a txt file in the user dir.
'''
def __init__(self):
self._log = logger.getLogger('TK')
dataDir = os.path.expanduser('~')+'/twpData'
if not os.path.exists(dataDir):
os.mkdir(dataDir)
self._timeFormat = "%Y%m%d %H:%M:%S"
self.dataFile = os.path.normpath(os.path.join(dataDir,'requests.txt'))
# Create file if it's missing
if not os.path.exists(self.dataFile):
open(self.dataFile,'w').close()
self._log.debug('Data file: {0}'.format(self.dataFile))
def addRequest(self):
''' adds a timestamp of current request'''
with open(self.dataFile,'a') as f:
f.write(dt.datetime.now().strftime(self._timeFormat)+'\n')
def nrRequests(self,timeSpan=600):
''' return number of requests in past timespan (s) '''
delta = dt.timedelta(seconds=timeSpan)
now = dt.datetime.now()
requests = 0
with open(self.dataFile,'r') as f:
lines = f.readlines()
for line in lines:
if now-dt.datetime.strptime(line.strip(),self._timeFormat) < delta:
requests+=1
if requests==0: # erase all contents if no requests are relevant
open(self.dataFile,'w').close()
self._log.debug('past requests: {0}'.format(requests))
return requests
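# Example (a sketch of the bookkeeping that Downloader.requestData relies on):
# tk = TimeKeeper()
# tk.addRequest()
# print tk.nrRequests(timeSpan=600) # requests logged during the last 10 minutes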
if __name__ == '__main__':
from extra import createContract
dl = Downloader(debug=True) # historic data downloader class
contract = createContract('SPY') # create contract using defaults (STK,SMART,USD)
data = dl.requestData(contract,"20141208 16:00:00 EST") # request 30-second data bars up to the given end time
data.to_csv('SPY.csv') # write data to csv
print 'Done' | bsd-3-clause |
khkaminska/scikit-learn | examples/model_selection/grid_search_digits.py | 227 | 2665 | """
============================================================
Parameter estimation using grid search with cross-validation
============================================================
This example shows how a classifier is optimized by cross-validation,
which is done using the :class:`sklearn.grid_search.GridSearchCV` object
on a development set that comprises only half of the available labeled data.
The performance of the selected hyper-parameters and trained model is
then measured on a dedicated evaluation set that was not used during
the model selection step.
More details on tools available for model selection can be found in the
sections on :ref:`cross_validation` and :ref:`grid_search`.
"""
from __future__ import print_function
from sklearn import datasets
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC
print(__doc__)
# Loading the Digits dataset
digits = datasets.load_digits()
# To apply a classifier on this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=0)
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
scores = ['precision', 'recall']
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print()
clf = GridSearchCV(SVC(C=1), tuned_parameters, cv=5,
scoring='%s_weighted' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
for params, mean_score, scores in clf.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print()
# Note the problem is too easy: the hyperparameter plateau is too flat and the
# output model is the same for precision and recall with ties in quality.
| bsd-3-clause |
lmjohns3/py-plot | lmj/plot.py | 1 | 12537 | '''Convenience methods and data for plotting.'''
import contextlib
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as anim
import matplotlib.patches as patch
import numpy as np
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
# import many names directly from matplotlib.pyplot.
from matplotlib.pyplot import *
# colors from https://github.com/mbostock/d3/wiki/Ordinal-Scales
COLOR11 = (
'#111111',
'#d62728',
'#1f77b4',
'#2ca02c',
'#9467bd',
'#ff7f0e',
'#e377c2',
'#8c564b',
'#bcbd22',
'#7f7f7f',
'#17becf',
)
# set some default styles for 2d plots.
sns.set(
context='notebook',
style='white',
palette='colorblind',
rc={
'font.sans-serif': [
'Source Sans Pro',
'Helvetica',
'Arial',
'Lucida Grande',
'Bitstream Vera Sans',
],
},
)
def plot(*args, **kwargs):
'''Helper function for making quick line plots.
This helper generates a new axes object and calls `matplotlib.pyplot.plot`.
Positional and keyword arguments are passed directly to
`matplotlib.pyplot.plot`.
Returns
-------
matplotlib.pyplot.Axes :
An axes object containing the plot.
'''
ax = create_axes(spines=True)
ax.plot(*args, **kwargs)
return ax
def scatter(*args, **kwargs):
'''Helper function for making quick scatter plots.
This helper generates a new axes object and calls `matplotlib.pyplot.scatter`.
Positional and keyword arguments are passed directly to
`matplotlib.pyplot.scatter`.
Returns
-------
matplotlib.pyplot.Axes :
An axes object containing the plot.
'''
ax = create_axes(spines=None)
ax.scatter(*args, **kwargs)
return ax
@contextlib.contextmanager
def axes(*args, **kwargs):
'''Produce a 2D plotting axes, for use in a with statement.
Parameters
----------
show_afterwards : bool, optional
If present and False, do not show the plot after the yield.
All other arguments are passed to :func:`create_axes`.
'''
show_afterwards = True
if 'show_afterwards' in kwargs:
show_afterwards = kwargs.pop('show_afterwards')
ax = create_axes(*args, **kwargs)
yield ax
if show_afterwards:
show()
@contextlib.contextmanager
def axes3d(*args, **kwargs):
'''Produce a 3D plotting axes, for use in a with statement.
Parameters
----------
show_afterwards : bool, optional
If present and False, do not show the plot after the yield.
All other arguments are passed to :func:`create_axes`.
'''
show_afterwards = True
if 'show_afterwards' in kwargs:
show_afterwards = kwargs.pop('show_afterwards')
ax = create_axes(*args, projection='3d', spines=None, **kwargs)
ax.w_xaxis.set_pane_color((1, 1, 1, 1))
ax.w_yaxis.set_pane_color((1, 1, 1, 1))
ax.w_zaxis.set_pane_color((1, 1, 1, 1))
yield ax
if show_afterwards:
show()
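# Example usage of the context managers above (a sketch):
# with axes(show_afterwards=False) as ax:
#     ax.plot(np.arange(10), np.arange(10) ** 2)
# with axes3d() as ax:
#     ax.scatter(np.random.randn(50), np.random.randn(50), np.random.randn(50))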
def create_axes(subplot=111, shape=None, log='', max_xticks=6, max_yticks=6, spines=None, **kwargs):
'''Create an axes object to hold a plot.
Parameters
----------
subplot : int or tuple of ints
This will be passed to `matplotlib.pyplot.subplot()`.
shape : tuple of ints
If this is provided, it will be passed as the argument to
`matplotlib.pyplot.axis()`.
log : str
A string, containing 'x' and/or 'y', indicating the axes that should
have log scales.
max_xticks : int, optional
The maximum number of ticks to position on the x-axis, if any.
max_yticks : int, optional
The maximum number of ticks to position on the y-axis, if any.
spines : any, optional
If given, this argument will be passed to `set_spines`.
Remaining keyword arguments will be passed to `axes()` (if `shape` is not
None) or `subplot()` (otherwise).
Returns
-------
matplotlib.Axes :
A matplotlib.Axes instance.
'''
if shape is not None:
ax = plt.axes(shape, **kwargs)
else:
if not isinstance(subplot, (tuple, list)):
subplot = (subplot, )
ax = plt.subplot(*subplot, **kwargs)
if 'x' in log:
ax.set_xscale('log')
if 'y' in log:
ax.set_yscale('log')
if max_xticks > 0:
plt.locator_params(axis='x', nbins=max_xticks - 1)
if max_yticks > 0:
plt.locator_params(axis='y', nbins=max_yticks - 1)
if spines is not None:
set_spines(ax, spines)
return ax
def set_spines(ax, spines=True, offset=6):
'''Set the spines on the given Axes object.
Arguments
---------
ax : matplotlib.Axes
Set the spines on this Axes object.
spines : any
This parameter provides formatting instructions for the axes in the plot.
It can take many different types of values:
- dictionary: The strings 'left', 'right', 'top', 'bottom' should be
mapped to numeric values---the numeric value specifies the distance
outward from the plot (in points) to show the corresponding spine.
Alternatively, the values in the dictionary can be anything that can
be passed to `matplotlib.Spine#set_position`.
- numeric: All spines will be shown, offset outward by this many points.
- string/list/tuple/set: Spines with names included in the
string/list/tuple/set will be shown at an offset of 6pt. Spines not
named will not be shown.
- falsey: No spines will be shown.
Otherwise, the default spine configuration is used---the bottom and left
spines are shown, offset outward by 6pt, and the top and right spines
are hidden.
offset : numeric
Default offset for spine names mentioned in the `spines` argument.
'''
if spines is True:
spines = 'bottom left'
if spines is False:
spines = ''
if isinstance(spines, (int, float)):
offset = spines
spines = 'top right bottom left'
if isinstance(spines, str):
spines = spines.split()
if isinstance(spines, (list, tuple, set)):
spines = {k: offset for k in spines}
for name, spine in ax.spines.items():
pos = spines.get(name)
if pos is None:
spine.set_visible(False)
elif isinstance(pos, int):
spine.set_position(('outward', pos))
else:
spine.set_position(pos)
if 'bottom' in spines:
ax.xaxis.tick_bottom()
else:
ax.xaxis.set_ticks([])
if 'left' in spines:
ax.yaxis.tick_left()
else:
ax.yaxis.set_ticks([])
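# Example usage (a sketch) of the `spines` forms accepted above:
# set_spines(ax, 'bottom left', offset=10) # show bottom/left spines, offset 10pt outward
# set_spines(ax, {'left': 0, 'bottom': 6}) # per-spine offsets
# set_spines(ax, False) # hide all spines and ticks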
def rotate_3d(display,
output=None,
elev=(15, 25),
azim=(10, 30),
jitter=0,
rotations=2,
duration_sec=10,
animation_fps=30,
movie_fps=30,
movie_args=(),
fig=None,
**kwargs):
'''Make a rotating 3D plot animation.
This helper calls out to a function to render a 3D graph, and then animates
a rotation around that content by changing the azimuth and elevation of the
camera. The actual content of the graph will, in this case, be static -- the
animation's only purpose is to make the 3D contents of the graph more clear
by rotating the entire graph around.
Parameters
----------
display : callable
Call this function with no arguments to show the 3D contents of the
graph.
output : str, optional
Save the output in this file, if given. Should be a .mp4 movie filename.
elev : (float, float), optional
Range for the elevation values in the animation. Defaults to (15, 25).
azim : (float, float), optional
Range for the azimuth values in the animation. Defaults to (10, 30).
jitter : float, optional
Add random noise with this standard deviation to the azimuth/elevation
at each time step. Defaults to 0 (no jitter).
rotations : float, optional
Render this many rotations around the azimuth/elevation ellipse.
Defaults to 2. May be fractional as well.
duration_sec : float, optional
Number of seconds for the overall animation. Defaults to 10.
animation_fps : float, optional
Number of frames per second that we should change the perspective and
render a new frame. Defaults to 30.
movie_fps : float, optional
Frames per second in the rendered movie output. Defaults to 30.
movie_args : tuple, optional
Extra arguments to pass to ffmpeg while creating the movie.
fig : dict, optional
Keyword arguments to pass to create the figure for the animation.
Returns
-------
anim : matplotlib animation
An animation containing the rendered frames.
'''
num_frames = duration_sec * animation_fps
fig = plt.figure(**(fig or {}))
ax = create_axes(111, projection='3d', spines=None, **kwargs)
def animate(i):
c_elev = (elev[1] + elev[0]) / 2
c_azim = (azim[1] + azim[0]) / 2
r_elev = (elev[1] - elev[0]) / 2
r_azim = (azim[1] - azim[0]) / 2
t = 2 * np.pi * i * rotations / num_frames
e = c_elev + r_elev * (np.sin(t) + jitter * np.random.randn())
a = c_azim + r_azim * (np.cos(t) + jitter * np.random.randn())
ax.view_init(elev=e, azim=a)
anim = matplotlib.animation.FuncAnimation(
fig, animate, init_func=display(ax), frames=num_frames,
interval=1000 / animation_fps, blit=False)
if output:
anim.save(output, fps=movie_fps, extra_args=movie_args)
return anim
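# Example usage (a sketch; 'rotation.mp4' is an arbitrary output filename):
# def draw(ax):
#     pts = np.random.randn(100, 3)
#     ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2])
# rotate_3d(draw, output='rotation.mp4', duration_sec=5)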
N = 127
t1 = np.linspace(0, 2 * np.pi, N)
t2 = np.linspace(0, np.pi, N)
XGRID = np.outer(np.cos(t1), np.sin(t2)).flatten()
YGRID = np.outer(np.sin(t1), np.sin(t2)).flatten()
ZGRID = np.outer(np.ones_like(t1), np.cos(t2)).flatten()
def eigs(ax, means, vals, vecs, ix=0, iy=1, contours=(1, ), **kwargs):
'''Draw an ellipse at each contour of a set of eigenvectors.
Parameters
----------
ax : Axis3D
Axis to plot on. This must be a 2D axis.
means : ndarray (dims, )
Means for each dimension of the data.
vals : ndarray (features, )
Eigenvalues of a dataset.
vecs : ndarray (dims, features)
Eigenvectors of a dataset. Each column is one eigenvector.
ix : int, optional
X-axis index. Defaults to 0.
iy : int, optional
Y-axis index. Defaults to 1.
contours : sequence of float, optional
List of contour lines to draw. Defaults to (1, ), i.e., just draw one
contour at the scale of the eigenvalues.
Returns
-------
surfaces :
A list of the artists that were plotted.
'''
artists = []
for c in contours:
artists.append(patch.Ellipse(
xy=[means[ix], means[iy]],
width=c * vals[ix],
height=c * vals[iy],
angle=np.degrees(np.arctan2(vecs[iy, ix], vecs[ix, ix])),
**kwargs))
[ax.add_artist(art) for art in artists]
return artists
def eigs3d(ax, means, vals, vecs, ix=0, iy=1, iz=2, contours=(1, ), **kwargs):
'''Draw a 3D ellipse at each contour of a set of eigenvectors.
Parameters
----------
ax : Axis3D
Axis to plot on. This must be a 3D axis.
means : ndarray (dims, )
Means for each dimension of the data.
vals : ndarray (features, )
Eigenvalues of a dataset.
vecs : ndarray (dims, features)
Eigenvectors of a dataset. Each column is one eigenvector.
ix : int, optional
X-axis index. Defaults to 0.
iy : int, optional
Y-axis index. Defaults to 1.
iz : int, optional
Z-axis index. Defaults to 2.
contours : sequence of float, optional
List of contour lines to draw. Defaults to (1, ), i.e., just draw one
contour at the scale of the eigenvalues.
Returns
-------
surfaces :
A list of the surfaces that were plotted.
'''
# np.ix_ selects the 3x3 eigenvector submatrix for the chosen axes; plain
# fancy indexing with two index lists would only pick out the diagonal entries.
r = np.dot(vecs[np.ix_((ix, iy, iz), (ix, iy, iz))],
np.vstack([vals[ix] * XGRID, vals[iy] * YGRID, vals[iz] * ZGRID]))
surfaces = []
for c in contours:
surfaces.append(ax.plot_surface(
means[ix] + c * r[ix].reshape((N, N)),
means[iy] + c * r[iy].reshape((N, N)),
means[iz] + c * r[iz].reshape((N, N)), **kwargs))
return surfaces
| mit |
acmaheri/sms-tools | software/models_interface/stft_function.py | 18 | 2785 | # function to call the main analysis/synthesis functions in software/models/stft.py
import numpy as np
import matplotlib.pyplot as plt
import os, sys
from scipy.signal import get_window
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../models/'))
import utilFunctions as UF
import stft as STFT
def main(inputFile = '../../sounds/piano.wav', window = 'hamming', M = 1024, N = 1024, H = 512):
"""
analysis/synthesis using the STFT
inputFile: input sound file (monophonic with sampling rate of 44100)
window: analysis window type (choice of rectangular, hanning, hamming, blackman, blackmanharris)
M: analysis window size
N: fft size (power of two, bigger or equal than M)
H: hop size (at least 1/2 of analysis window size to have good overlap-add)
"""
# read input sound (monophonic with sampling rate of 44100)
fs, x = UF.wavread(inputFile)
# compute analysis window
w = get_window(window, M)
# compute the magnitude and phase spectrogram
mX, pX = STFT.stftAnal(x, fs, w, N, H)
# perform the inverse stft
y = STFT.stftSynth(mX, pX, M, H)
# output sound file (monophonic with sampling rate of 44100)
outputFile = 'output_sounds/' + os.path.basename(inputFile)[:-4] + '_stft.wav'
# write the sound resulting from the inverse stft
UF.wavwrite(y, fs, outputFile)
# create figure to plot
plt.figure(figsize=(12, 9))
# frequency range to plot
maxplotfreq = 5000.0
# plot the input sound
plt.subplot(4,1,1)
plt.plot(np.arange(x.size)/float(fs), x)
plt.axis([0, x.size/float(fs), min(x), max(x)])
plt.ylabel('amplitude')
plt.xlabel('time (sec)')
plt.title('input sound: x')
# plot magnitude spectrogram
plt.subplot(4,1,2)
numFrames = int(mX[:,0].size)
frmTime = H*np.arange(numFrames)/float(fs)
binFreq = fs*np.arange(N*maxplotfreq/fs)/N
plt.pcolormesh(frmTime, binFreq, np.transpose(mX[:,:N*maxplotfreq/fs+1]))
plt.xlabel('time (sec)')
plt.ylabel('frequency (Hz)')
plt.title('magnitude spectrogram')
plt.autoscale(tight=True)
# plot the phase spectrogram
plt.subplot(4,1,3)
numFrames = int(pX[:,0].size)
frmTime = H*np.arange(numFrames)/float(fs)
binFreq = fs*np.arange(N*maxplotfreq/fs)/N
plt.pcolormesh(frmTime, binFreq, np.transpose(np.diff(pX[:,:N*maxplotfreq/fs+1],axis=1)))
plt.xlabel('time (sec)')
plt.ylabel('frequency (Hz)')
plt.title('phase spectrogram (derivative)')
plt.autoscale(tight=True)
# plot the output sound
plt.subplot(4,1,4)
plt.plot(np.arange(y.size)/float(fs), y)
plt.axis([0, y.size/float(fs), min(y), max(y)])
plt.ylabel('amplitude')
plt.xlabel('time (sec)')
plt.title('output sound: y')
plt.tight_layout()
plt.show()
if __name__ == "__main__":
main()
| agpl-3.0 |
Garrett-R/scikit-learn | sklearn/decomposition/tests/test_factor_analysis.py | 20 | 3128 | # Author: Christian Osendorfer <[email protected]>
# Alexandre Gramfort <[email protected]>
# Licence: BSD3
import numpy as np
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils import ConvergenceWarning
from sklearn.decomposition import FactorAnalysis
def test_factor_analysis():
"""Test FactorAnalysis ability to recover the data covariance structure
"""
rng = np.random.RandomState(0)
n_samples, n_features, n_components = 20, 5, 3
# Some random settings for the generative model
W = rng.randn(n_components, n_features)
# latent variable of dim 3, 20 of it
h = rng.randn(n_samples, n_components)
# using gamma to model different noise variance
# per component
noise = rng.gamma(1, size=n_features) * rng.randn(n_samples, n_features)
# generate observations
# wlog, mean is 0
X = np.dot(h, W) + noise
assert_raises(ValueError, FactorAnalysis, svd_method='foo')
fa_fail = FactorAnalysis()
fa_fail.svd_method = 'foo'
assert_raises(ValueError, fa_fail.fit, X)
fas = []
for method in ['randomized', 'lapack']:
fa = FactorAnalysis(n_components=n_components, svd_method=method)
fa.fit(X)
fas.append(fa)
X_t = fa.transform(X)
assert_equal(X_t.shape, (n_samples, n_components))
assert_almost_equal(fa.loglike_[-1], fa.score_samples(X).sum())
assert_almost_equal(fa.score_samples(X).mean(), fa.score(X))
diff = np.all(np.diff(fa.loglike_))
assert_greater(diff, 0., 'Log likelihood did not increase')
# Sample Covariance
scov = np.cov(X, rowvar=0., bias=1.)
# Model Covariance
mcov = fa.get_covariance()
diff = np.sum(np.abs(scov - mcov)) / W.size
assert_less(diff, 0.1, "Mean absolute difference is %f" % diff)
fa = FactorAnalysis(n_components=n_components,
noise_variance_init=np.ones(n_features))
assert_raises(ValueError, fa.fit, X[:, :2])
f = lambda x, y: np.abs(getattr(x, y)) # sign will not be equal
fa1, fa2 = fas
for attr in ['loglike_', 'components_', 'noise_variance_']:
assert_almost_equal(f(fa1, attr), f(fa2, attr))
fa1.max_iter = 1
fa1.verbose = True
assert_warns(ConvergenceWarning, fa1.fit, X)
assert_warns(DeprecationWarning, FactorAnalysis, verbose=1)
# Test get_covariance and get_precision with n_components == n_features
# with n_components < n_features and with n_components == 0
for n_components in [0, 2, X.shape[1]]:
fa.n_components = n_components
fa.fit(X)
cov = fa.get_covariance()
precision = fa.get_precision()
assert_array_almost_equal(np.dot(cov, precision),
np.eye(X.shape[1]), 12)
| bsd-3-clause |
Jozhogg/iris | lib/iris/symbols.py | 1 | 7746 | # (C) British Crown Copyright 2010 - 2014, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
"""
Contains symbol definitions for use with :func:`iris.plot.symbols`.
"""
from __future__ import (absolute_import, division, print_function)
import itertools
import math
from matplotlib.patches import PathPatch
from matplotlib.path import Path
import numpy as np
__all__ = ('CLOUD_COVER',)
# The thickness to use for lines, circles, etc.
_THICKNESS = 0.1
def _make_merged_patch(paths):
# Convert a list of Path instances into a single, black PathPatch.
# Prepare empty vertex/code arrays for the merged path.
# The vertex array is initially flat for convenient initialisation,
# but is then reshaped to (N, 2).
total_len = sum(len(path) for path in paths)
all_vertices = np.empty(total_len * 2)
all_codes = np.empty(total_len, dtype=Path.code_type)
# Copy vertex/code details from the source paths
all_segments = itertools.chain(*(path.iter_segments() for path in paths))
i_vertices = 0
i_codes = 0
for vertices, code in all_segments:
n_vertices = len(vertices)
all_vertices[i_vertices:i_vertices + n_vertices] = vertices
i_vertices += n_vertices
n_codes = n_vertices // 2
if code == Path.STOP:
code = Path.MOVETO
all_codes[i_codes:i_codes + n_codes] = code
i_codes += n_codes
all_vertices.shape = (total_len, 2)
return PathPatch(Path(all_vertices, all_codes), facecolor='black',
edgecolor='none')
def _ring_path():
# Returns a Path for a hollow ring.
# The outer radius is 1, the inner radius is 1 - _THICKNESS.
circle = Path.unit_circle()
inner_radius = 1.0 - _THICKNESS
vertices = np.concatenate([circle.vertices[:-1],
circle.vertices[-2::-1] * inner_radius])
codes = np.concatenate([circle.codes[:-1], circle.codes[:-1]])
return Path(vertices, codes)
def _vertical_bar_path():
# Returns a Path for a vertical rectangle, with width _THICKNESS, that will
# nicely overlap the result of _ring_path().
width = _THICKNESS / 2.0
inner_radius = 1.0 - _THICKNESS
vertices = np.array([
[-width, -inner_radius],
[width, -inner_radius],
[width, inner_radius],
[-width, inner_radius],
[-width, inner_radius]
])
codes = np.array([Path.MOVETO, Path.LINETO, Path.LINETO, Path.LINETO,
Path.CLOSEPOLY])
return Path(vertices, codes)
def _slot_path():
# Returns a Path for a filled unit circle with a vertical rectangle
# removed.
circle = Path.unit_circle()
vertical_bar = _vertical_bar_path()
vertices = np.concatenate([circle.vertices[:-1],
vertical_bar.vertices[-2::-1]])
codes = np.concatenate([circle.codes[:-1], vertical_bar.codes[:-1]])
return Path(vertices, codes)
def _left_bar_path():
# Returns a Path for the left-hand side of a horizontal rectangle, with
# height _THICKNESS, that will nicely overlap the result of _ring_path().
inner_radius = 1.0 - _THICKNESS
height = _THICKNESS / 2.0
vertices = np.array([
[-inner_radius, -height],
[0, -height],
[0, height],
[-inner_radius, height],
[-inner_radius, height]
])
codes = np.array([Path.MOVETO, Path.LINETO, Path.LINETO, Path.LINETO,
Path.CLOSEPOLY])
return Path(vertices, codes)
def _slash_path():
# Returns a Path for diagonal, bottom-left to top-right rectangle, with
# width _THICKNESS, that will nicely overlap the result of _ring_path().
half_width = _THICKNESS / 2.0
central_radius = 1.0 - half_width
cos45 = math.cos(math.radians(45))
end_point_offset = cos45 * central_radius
half_width_offset = cos45 * half_width
vertices = np.array([
[-end_point_offset - half_width_offset,
-end_point_offset + half_width_offset],
[-end_point_offset + half_width_offset,
-end_point_offset - half_width_offset],
[end_point_offset + half_width_offset,
end_point_offset - half_width_offset],
[end_point_offset - half_width_offset,
end_point_offset + half_width_offset],
[-end_point_offset - half_width_offset,
-end_point_offset + half_width_offset]
])
codes = np.array([Path.MOVETO, Path.LINETO, Path.LINETO, Path.LINETO,
Path.CLOSEPOLY])
return Path(vertices, codes)
def _backslash_path():
# Returns a Path for diagonal, top-left to bottom-right rectangle, with
# width _THICKNESS, that will nicely overlap the result of _ring_path().
half_width = _THICKNESS / 2.0
central_radius = 1.0 - half_width
cos45 = math.cos(math.radians(45))
end_point_offset = cos45 * central_radius
half_width_offset = cos45 * half_width
vertices = np.array([
[-end_point_offset - half_width_offset,
end_point_offset - half_width_offset],
[end_point_offset - half_width_offset,
-end_point_offset - half_width_offset],
[end_point_offset + half_width_offset,
-end_point_offset + half_width_offset],
[-end_point_offset + half_width_offset,
end_point_offset + half_width_offset],
[-end_point_offset - half_width_offset,
end_point_offset - half_width_offset]
])
codes = np.array([Path.MOVETO, Path.LINETO, Path.LINETO, Path.LINETO,
Path.CLOSEPOLY])
return Path(vertices, codes)
def _wedge_fix(wedge_path):
'''
Fixes the problem with Path.wedge where it doesn't initialise the first,
and last two vertices.
This fix should not have any side-effects once Path.wedge has been fixed,
but will then be redundant and should be removed.
This is fixed in MPL v1.3, raising a RuntimeError. A check is performed to
allow for backward compatibility with MPL v1.2.x.
'''
if wedge_path.vertices.flags.writeable:
wedge_path.vertices[0] = 0
wedge_path.vertices[-2:] = 0
return wedge_path
CLOUD_COVER = {
0: [_ring_path()],
1: [_ring_path(), _vertical_bar_path()],
2: [_ring_path(), _wedge_fix(Path.wedge(0, 90))],
3: [_ring_path(), _wedge_fix(Path.wedge(0, 90)), _vertical_bar_path()],
4: [_ring_path(), Path.unit_circle_righthalf()],
5: [_ring_path(), Path.unit_circle_righthalf(), _left_bar_path()],
6: [_ring_path(), _wedge_fix(Path.wedge(-180, 90))],
7: [_slot_path()],
8: [Path.unit_circle()],
9: [_ring_path(), _slash_path(), _backslash_path()],
}
"""
A dictionary mapping WMO cloud cover codes to their corresponding symbol.
See http://www.wmo.int/pages/prog/www/DPFS/documents/485_Vol_I_en_colour.pdf
Part II, Appendix II.4, Graphical Representation of Data, Analyses
and Forecasts
"""
def _convert_paths_to_patches():
# Convert the symbols defined as lists-of-paths into patches.
for code, symbol in CLOUD_COVER.iteritems():
CLOUD_COVER[code] = _make_merged_patch(symbol)
_convert_paths_to_patches()
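# Example usage (a sketch): after the conversion above, each CLOUD_COVER entry
# is a matplotlib PathPatch, so a single symbol can be previewed directly:
# import matplotlib.pyplot as plt
# fig, ax = plt.subplots()
# ax.add_patch(CLOUD_COVER[3])
# ax.set_xlim(-2, 2); ax.set_ylim(-2, 2); ax.set_aspect('equal')
# plt.show()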
| lgpl-3.0 |
arahuja/scikit-learn | sklearn/externals/joblib/__init__.py | 35 | 4382 | """ Joblib is a set of tools to provide **lightweight pipelining in
Python**. In particular, joblib offers:
1. transparent disk-caching of the output values and lazy re-evaluation
(memoize pattern)
2. easy simple parallel computing
3. logging and tracing of the execution
Joblib is optimized to be **fast** and **robust** in particular on large
data and has specific optimizations for `numpy` arrays. It is
**BSD-licensed**.
============================== ============================================
**User documentation**: http://packages.python.org/joblib
**Download packages**: http://pypi.python.org/pypi/joblib#downloads
**Source code**: http://github.com/joblib/joblib
**Report issues**: http://github.com/joblib/joblib/issues
============================== ============================================
Vision
--------
The vision is to provide tools to easily achieve better performance and
reproducibility when working with long running jobs.
* **Avoid computing twice the same thing**: code is rerun over and
over, for instance when prototyping computational-heavy jobs (as in
scientific development), but a hand-crafted solution to alleviate this
issue is error-prone and often leads to unreproducible results
* **Persist to disk transparently**: persisting in an efficient way
arbitrary objects containing large data is hard. Using
joblib's caching mechanism avoids hand-written persistence and
implicitly links the file on disk to the execution context of
the original Python object. As a result, joblib's persistence is
good for resuming an application status or computational job, eg
after a crash.
Joblib strives to address these problems while **leaving your code and
your flow control as unmodified as possible** (no framework, no new
paradigms).
Main features
------------------
1) **Transparent and fast disk-caching of output value:** a memoize or
make-like functionality for Python functions that works well for
arbitrary Python objects, including very large numpy arrays. Separate
persistence and flow-execution logic from domain logic or algorithmic
code by writing the operations as a set of steps with well-defined
inputs and outputs: Python functions. Joblib can save their
computation to disk and rerun it only if necessary::
>>> import numpy as np
>>> from sklearn.externals.joblib import Memory
>>> mem = Memory(cachedir='/tmp/joblib')
>>> import numpy as np
>>> a = np.vander(np.arange(3)).astype(np.float)
>>> square = mem.cache(np.square)
>>> b = square(a) # doctest: +ELLIPSIS
________________________________________________________________________________
[Memory] Calling square...
square(array([[ 0., 0., 1.],
[ 1., 1., 1.],
[ 4., 2., 1.]]))
___________________________________________________________square - 0...s, 0.0min
>>> c = square(a)
>>> # The above call did not trigger an evaluation
2) **Embarrassingly parallel helper:** to make it easy to write readable
parallel code and debug it quickly::
>>> from sklearn.externals.joblib import Parallel, delayed
>>> from math import sqrt
>>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10))
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
3) **Logging/tracing:** The different functionalities will
progressively acquire a better logging mechanism to help track what
has been run, and capture I/O easily. In addition, Joblib will
provide a few I/O primitives, to easily define logging and
display streams, and provide a way of compiling a report.
We want to be able to quickly inspect what has been run.
4) **Fast compressed Persistence**: a replacement for pickle to work
efficiently on Python objects containing large data (
*joblib.dump* & *joblib.load* ).
..
>>> import shutil ; shutil.rmtree('/tmp/joblib/')
"""
__version__ = '0.8.4'
from .memory import Memory, MemorizedResult
from .logger import PrintTime
from .logger import Logger
from .hashing import hash
from .numpy_pickle import dump
from .numpy_pickle import load
from .parallel import Parallel
from .parallel import delayed
from .parallel import cpu_count
| bsd-3-clause |