def compute_membership_strengths(
knn_indices,
knn_dists,
sigmas,
rhos,
return_dists=False,
bipartite=False,
):
"""Construct the membership strength data for the 1-skeleton of each local
fuzzy simplicial set -- this is formed as a sparse matrix where each row is
a local fuzzy simplicial set, with a membership strength for the
1-simplex to each other data point.
Parameters
----------
knn_indices: array of shape (n_samples, n_neighbors)
The indices of the ``n_neighbors`` closest points in the dataset.
knn_dists: array of shape (n_samples, n_neighbors)
The distances to the ``n_neighbors`` closest points in the dataset.
sigmas: array of shape (n_samples)
The normalization factor derived from the metric tensor approximation.
rhos: array of shape (n_samples)
The local connectivity adjustment.
return_dists: bool (optional, default False)
Whether to return the pairwise distance associated with each edge.
bipartite: bool (optional, default False)
Does the nearest neighbour set represent a bipartite graph? That is, are the
nearest neighbour indices drawn from a different point set than the row indices?
Returns
-------
rows: array of shape (n_samples * n_neighbors)
Row data for the resulting sparse matrix (coo format)
cols: array of shape (n_samples * n_neighbors)
Column data for the resulting sparse matrix (coo format)
vals: array of shape (n_samples * n_neighbors)
Entries for the resulting sparse matrix (coo format)
dists: array of shape (n_samples * n_neighbors)
Distance associated with each entry in the resulting sparse matrix
"""
n_samples = knn_indices.shape[0]
n_neighbors = knn_indices.shape[1]
rows = np.zeros(knn_indices.size, dtype=np.int32)
cols = np.zeros(knn_indices.size, dtype=np.int32)
vals = np.zeros(knn_indices.size, dtype=np.float32)
if return_dists:
dists = np.zeros(knn_indices.size, dtype=np.float32)
else:
dists = None
for i in range(n_samples):
for j in range(n_neighbors):
if knn_indices[i, j] == -1:
continue # We didn't get the full knn for i
# If applied to an adjacency matrix, points shouldn't be similar to themselves.
# If applied to an incidence matrix (or bipartite graph), the row and column indices refer to different point sets.
if (not bipartite) and (knn_indices[i, j] == i):
val = 0.0
elif knn_dists[i, j] - rhos[i] <= 0.0 or sigmas[i] == 0.0:
val = 1.0
else:
val = np.exp(-((knn_dists[i, j] - rhos[i]) / (sigmas[i])))
rows[i * n_neighbors + j] = i
cols[i * n_neighbors + j] = knn_indices[i, j]
vals[i * n_neighbors + j] = val
if return_dists:
dists[i * n_neighbors + j] = knn_dists[i, j]
return rows, cols, vals, dists
| compute_membership_strengths | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
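A minimal usage sketch (not part of the source file; the scikit-learn neighbour search and the toy data are assumptions) showing how this function is normally driven: knn distances are first smoothed with ``smooth_knn_dist`` -- exactly as ``fuzzy_simplicial_set`` below does -- and the returned triplets are assembled into a sparse COO matrix.

import numpy as np
import scipy.sparse
from sklearn.neighbors import NearestNeighbors
from umap.umap_ import smooth_knn_dist, compute_membership_strengths

# Toy data and an exact knn search (any knn backend works here).
X = np.random.RandomState(42).normal(size=(100, 5)).astype(np.float32)
n_neighbors = 10
knn_dists, knn_indices = NearestNeighbors(n_neighbors=n_neighbors).fit(X).kneighbors(X)
knn_dists = knn_dists.astype(np.float32)

# Per-point bandwidths (sigmas) and local-connectivity offsets (rhos).
sigmas, rhos = smooth_knn_dist(knn_dists, float(n_neighbors))
rows, cols, vals, _ = compute_membership_strengths(knn_indices, knn_dists, sigmas, rhos)
graph = scipy.sparse.coo_matrix((vals, (rows, cols)), shape=(X.shape[0], X.shape[0]))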
def fuzzy_simplicial_set(
X,
n_neighbors,
random_state,
metric,
metric_kwds={},
knn_indices=None,
knn_dists=None,
angular=False,
set_op_mix_ratio=1.0,
local_connectivity=1.0,
apply_set_operations=True,
verbose=False,
return_dists=None,
):
"""Given a set of data X, a neighborhood size, and a measure of distance
compute the fuzzy simplicial set (here represented as a fuzzy graph in
the form of a sparse matrix) associated to the data. This is done by
locally approximating geodesic distance at each point, creating a fuzzy
simplicial set for each such point, and then combining all the local
fuzzy simplicial sets into a global one via a fuzzy union.
Parameters
----------
X: array of shape (n_samples, n_features)
The data to be modelled as a fuzzy simplicial set.
n_neighbors: int
The number of neighbors to use to approximate geodesic distance.
Larger numbers induce more global estimates of the manifold that can
miss finer detail, while smaller values will focus on fine manifold
structure to the detriment of the larger picture.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
metric: string or function (optional, default 'euclidean')
The metric to use to compute distances in high dimensional space.
If a string is passed it must match a valid predefined metric. If
a general metric is required a function that takes two 1d arrays and
returns a float can be provided. For performance purposes it is
required that this be a numba jit'd function. Valid string metrics
include:
* euclidean (or l2)
* manhattan (or l1)
* cityblock
* braycurtis
* canberra
* chebyshev
* correlation
* cosine
* dice
* hamming
* jaccard
* kulsinski
* ll_dirichlet
* mahalanobis
* matching
* minkowski
* rogerstanimoto
* russellrao
* seuclidean
* sokalmichener
* sokalsneath
* sqeuclidean
* yule
* wminkowski
Metrics that take arguments (such as minkowski, mahalanobis etc.)
can have arguments passed via the metric_kwds dictionary. At this
time care must be taken and dictionary elements must be ordered
appropriately; this will hopefully be fixed in the future.
metric_kwds: dict (optional, default {})
Arguments to pass on to the metric, such as the ``p`` value for
Minkowski distance.
knn_indices: array of shape (n_samples, n_neighbors) (optional)
If the k-nearest neighbors of each point has already been calculated
you can pass them in here to save computation time. This should be
an array with the indices of the k-nearest neighbors as a row for
each data point.
knn_dists: array of shape (n_samples, n_neighbors) (optional)
If the k-nearest neighbors of each point has already been calculated
you can pass them in here to save computation time. This should be
an array with the distances of the k-nearest neighbors as a row for
each data point.
angular: bool (optional, default False)
Whether to use angular/cosine distance for the random projection
forest for seeding NN-descent to determine approximate nearest
neighbors.
set_op_mix_ratio: float (optional, default 1.0)
Interpolate between (fuzzy) union and intersection as the set operation
used to combine local fuzzy simplicial sets to obtain a global fuzzy
simplicial set. Both fuzzy set operations use the product t-norm.
The value of this parameter should be between 0.0 and 1.0; a value of
1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy
intersection.
local_connectivity: int (optional, default 1)
The local connectivity required -- i.e. the number of nearest
neighbors that should be assumed to be connected at a local level.
The higher this value the more connected the manifold becomes
locally. In practice this should be not more than the local intrinsic
dimension of the manifold.
apply_set_operations: bool (optional, default True)
Whether to combine the local fuzzy simplicial sets into a global one using the
fuzzy set operations controlled by ``set_op_mix_ratio``; if False the raw,
un-symmetrized local sets are returned.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
return_dists: bool or None (optional, default None)
Whether to return the pairwise distance associated with each edge.
Returns
-------
fuzzy_simplicial_set: coo_matrix
A fuzzy simplicial set represented as a sparse matrix. The (i,
j) entry of the matrix represents the membership strength of the
1-simplex between the ith and jth sample points.
"""
if knn_indices is None or knn_dists is None:
knn_indices, knn_dists, _ = nearest_neighbors(
X,
n_neighbors,
metric,
metric_kwds,
angular,
random_state,
verbose=verbose,
)
knn_dists = knn_dists.astype(np.float32)
sigmas, rhos = smooth_knn_dist(
knn_dists,
float(n_neighbors),
local_connectivity=float(local_connectivity),
)
rows, cols, vals, dists = compute_membership_strengths(
knn_indices, knn_dists, sigmas, rhos, return_dists
)
result = scipy.sparse.coo_matrix(
(vals, (rows, cols)), shape=(X.shape[0], X.shape[0])
)
result.eliminate_zeros()
if apply_set_operations:
transpose = result.transpose()
prod_matrix = result.multiply(transpose)
result = (
set_op_mix_ratio * (result + transpose - prod_matrix)
+ (1.0 - set_op_mix_ratio) * prod_matrix
)
result.eliminate_zeros()
if return_dists is None:
return result, sigmas, rhos
else:
if return_dists:
dmat = scipy.sparse.coo_matrix(
(dists, (rows, cols)), shape=(X.shape[0], X.shape[0])
)
dists = dmat.maximum(dmat.transpose()).todok()
else:
dists = None
return result, sigmas, rhos, dists
| fuzzy_simplicial_set | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
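As a rough, hedged usage sketch (the toy data are an assumption), the function can be called directly on a small dense array; with ``return_dists`` left at ``None`` it returns the fuzzy graph together with the per-point ``sigmas`` and ``rhos``:

import numpy as np
from umap.umap_ import fuzzy_simplicial_set

X = np.random.RandomState(0).normal(size=(200, 8)).astype(np.float32)
graph, sigmas, rhos = fuzzy_simplicial_set(
    X,
    n_neighbors=15,
    random_state=np.random.RandomState(0),
    metric="euclidean",
)
# graph is a symmetrised sparse matrix of membership strengths in [0, 1].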
def fast_intersection(rows, cols, values, target, unknown_dist=1.0, far_dist=5.0):
"""Under the assumption of categorical distance for the intersecting
simplicial set perform a fast intersection.
Parameters
----------
rows: array
An array of the row of each non-zero in the sparse matrix
representation.
cols: array
An array of the column of each non-zero in the sparse matrix
representation.
values: array
An array of the value of each non-zero in the sparse matrix
representation.
target: array of shape (n_samples)
The categorical labels to use in the intersection.
unknown_dist: float (optional, default 1.0)
The distance an unknown label (-1) is assumed to be from any point.
far_dist: float (optional, default 5.0)
The distance between unmatched labels.
Returns
-------
None
"""
for nz in range(rows.shape[0]):
i = rows[nz]
j = cols[nz]
if (target[i] == -1) or (target[j] == -1):
values[nz] *= np.exp(-unknown_dist)
elif target[i] != target[j]:
values[nz] *= np.exp(-far_dist)
return
| fast_intersection | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
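A small illustrative sketch (the graph and labels are invented): the update happens in place, edges between points with differing labels are damped by exp(-far_dist) and edges touching an unknown label (-1) by exp(-unknown_dist).

import numpy as np
import scipy.sparse
from umap.umap_ import fast_intersection

graph = scipy.sparse.coo_matrix(
    np.array([[0.0, 0.8, 0.3], [0.8, 0.0, 0.5], [0.3, 0.5, 0.0]])
)
labels = np.array([0, 0, 1])
fast_intersection(graph.row, graph.col, graph.data, labels)
# Edges between label 0 and label 1 are now multiplied by np.exp(-5.0);
# edges within the same label are left untouched.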
def fast_metric_intersection(
rows, cols, values, discrete_space, metric, metric_args, scale
):
"""Under the assumption of categorical distance for the intersecting
simplicial set perform a fast intersection.
Parameters
----------
rows: array
An array of the row of each non-zero in the sparse matrix
representation.
cols: array
An array of the column of each non-zero in the sparse matrix
representation.
values: array
An array of the values of each non-zero in the sparse matrix
representation.
discrete_space: array of shape (n_samples, n_features)
The vectors of categorical labels to use in the intersection.
metric: numba function
The function used to calculate distance over the target array.
metric_args: tuple
Extra positional arguments to pass to the metric function.
scale: float
A scaling to apply to the metric.
Returns
-------
None
"""
for nz in range(rows.shape[0]):
i = rows[nz]
j = cols[nz]
dist = metric(discrete_space[i], discrete_space[j], *metric_args)
values[nz] *= np.exp(-(scale * dist))
return
| fast_metric_intersection | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
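A brief sketch (invented toy graph and target values) using one of the numba-compiled metrics registered in ``umap.distances`` -- the same registry that ``discrete_metric_simplicial_set_intersection`` below consults:

import numpy as np
import scipy.sparse
import umap.distances as dist
from umap.umap_ import fast_metric_intersection

graph = scipy.sparse.coo_matrix(np.array([[0.0, 0.9], [0.9, 0.0]]))
target = np.array([[0.0], [2.0]])            # one continuous target value per sample
metric = dist.named_distances["euclidean"]   # any numba-jit'd metric works
fast_metric_intersection(graph.row, graph.col, graph.data, target, metric, (), 1.0)
# Each edge weight has been multiplied by exp(-scale * d(target_i, target_j)).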
def reset_local_connectivity(simplicial_set, reset_local_metric=False):
"""Reset the local connectivity requirement -- each data sample should
have complete confidence in at least one 1-simplex in the simplicial set.
We can enforce this by locally rescaling confidences, and then remerging the
different local simplicial sets together.
Parameters
----------
simplicial_set: sparse matrix
The simplicial set for which to recalculate with respect to local
connectivity.
Returns
-------
simplicial_set: sparse_matrix
The recalculated simplicial set, now with the local connectivity
assumption restored.
"""
simplicial_set = normalize(simplicial_set, norm="max")
if reset_local_metric:
simplicial_set = simplicial_set.tocsr()
reset_local_metrics(simplicial_set.indptr, simplicial_set.data)
simplicial_set = simplicial_set.tocoo()
transpose = simplicial_set.transpose()
prod_matrix = simplicial_set.multiply(transpose)
simplicial_set = simplicial_set + transpose - prod_matrix
simplicial_set.eliminate_zeros()
return simplicial_set
| reset_local_connectivity | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
def discrete_metric_simplicial_set_intersection(
simplicial_set,
discrete_space,
unknown_dist=1.0,
far_dist=5.0,
metric=None,
metric_kws={},
metric_scale=1.0,
):
"""Combine a fuzzy simplicial set with another fuzzy simplicial set
generated from discrete metric data using discrete distances. The target
data is assumed to be categorical label data (a vector of labels),
and this will update the fuzzy simplicial set to respect that label data.
TODO: optional category cardinality based weighting of distance
Parameters
----------
simplicial_set: sparse matrix
The input fuzzy simplicial set.
discrete_space: array of shape (n_samples)
The categorical labels to use in the intersection.
unknown_dist: float (optional, default 1.0)
The distance an unknown label (-1) is assumed to be from any point.
far_dist: float (optional, default 5.0)
The distance between unmatched labels.
metric: str (optional, default None)
If not None, then use this metric to determine the
distance between values.
metric_kws: dict (optional, default {})
Keyword arguments to pass on to the custom metric.
metric_scale: float (optional, default 1.0)
If using a custom metric scale the distance values by
this value -- this controls the weighting of the
intersection. Larger values weight more toward target.
Returns
-------
simplicial_set: sparse matrix
The resulting intersected fuzzy simplicial set.
"""
simplicial_set = simplicial_set.tocoo()
if metric is not None:
# We presume target is now a 2d array, with each row being a
# vector of target info
if metric in dist.named_distances:
metric_func = dist.named_distances[metric]
else:
raise ValueError("Discrete intersection metric is not recognized")
fast_metric_intersection(
simplicial_set.row,
simplicial_set.col,
simplicial_set.data,
discrete_space,
metric_func,
tuple(metric_kws.values()),
metric_scale,
)
else:
fast_intersection(
simplicial_set.row,
simplicial_set.col,
simplicial_set.data,
discrete_space,
unknown_dist,
far_dist,
)
simplicial_set.eliminate_zeros()
return reset_local_connectivity(simplicial_set)
| discrete_metric_simplicial_set_intersection | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
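A hedged end-to-end sketch (the data and labels are invented) combining the output of ``fuzzy_simplicial_set`` with categorical labels; this mirrors what supervised UMAP does inside ``fit`` when ``target_metric='categorical'``:

import numpy as np
from umap.umap_ import fuzzy_simplicial_set, discrete_metric_simplicial_set_intersection

X = np.random.RandomState(1).normal(size=(300, 10)).astype(np.float32)
y = np.random.RandomState(1).randint(0, 3, size=300)   # categorical labels; -1 would mean "unknown"
graph, sigmas, rhos = fuzzy_simplicial_set(
    X, n_neighbors=15, random_state=np.random.RandomState(1), metric="euclidean"
)
graph = discrete_metric_simplicial_set_intersection(graph, y, far_dist=5.0)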
def make_epochs_per_sample(weights, n_epochs):
"""Given a set of weights and number of epochs generate the number of
epochs per sample for each weight.
Parameters
----------
weights: array of shape (n_1_simplices)
The weights of how much we wish to sample each 1-simplex.
n_epochs: int
The total number of epochs we want to train for.
Returns
-------
An array of number of epochs per sample, one for each 1-simplex.
"""
result = -1.0 * np.ones(weights.shape[0], dtype=np.float64)
n_samples = n_epochs * (weights / weights.max())
result[n_samples > 0] = float(n_epochs) / np.float64(n_samples[n_samples > 0])
return result
| make_epochs_per_sample | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
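A worked toy example (values invented): because the result simplifies to ``weights.max() / weights`` for every non-zero weight, the heaviest edge is sampled every epoch while lighter edges are sampled proportionally less often.

import numpy as np
from umap.umap_ import make_epochs_per_sample

weights = np.array([1.0, 0.5, 0.1])
print(make_epochs_per_sample(weights, 200))
# -> [ 1.  2. 10.]  the strongest edge is updated every epoch,
#    the weakest only every 10th epoch.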
def simplicial_set_embedding(
data,
graph,
n_components,
initial_alpha,
a,
b,
gamma,
negative_sample_rate,
n_epochs,
init,
random_state,
metric,
metric_kwds,
densmap,
densmap_kwds,
output_dens,
output_metric=dist.named_distances_with_gradients["euclidean"],
output_metric_kwds={},
euclidean_output=True,
parallel=False,
verbose=False,
tqdm_kwds=None,
):
"""Perform a fuzzy simplicial set embedding, using a specified
initialisation method and then minimizing the fuzzy set cross entropy
between the 1-skeletons of the high and low dimensional fuzzy simplicial
sets.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data to be embedded by UMAP.
graph: sparse matrix
The 1-skeleton of the high dimensional fuzzy simplicial set as
represented by a graph for which we require a sparse matrix for the
(weighted) adjacency matrix.
n_components: int
The dimensionality of the euclidean space into which to embed the data.
initial_alpha: float
Initial learning rate for the SGD.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
gamma: float
Weight to apply to negative samples.
negative_sample_rate: int (optional, default 5)
The number of negative samples to select per positive sample
in the optimization process. Increasing this value will result
in greater repulsive force being applied, greater optimization
cost, but slightly more accuracy.
n_epochs: int (optional, default 0), or list of int
The number of training epochs to be used in optimizing the
low dimensional embedding. Larger values result in more accurate
embeddings. If 0 is specified a value will be selected based on
the size of the input dataset (200 for large datasets, 500 for small).
If a list of int is specified, then the intermediate embeddings at the
different epochs specified in that list are returned in
``aux_data["embedding_list"]``.
init: string
How to initialize the low dimensional embedding. Options are:
* 'spectral': use a spectral embedding of the fuzzy 1-skeleton
* 'random': assign initial embedding positions at random.
* 'pca': use the first n_components from PCA applied to the input data.
* 'tswspectral': use a spectral embedding warm-started via a truncated SVD
(handled by ``tswspectral_layout``).
* A numpy array of initial embedding positions.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
metric: string or callable
The metric used to measure distance in high dimensional space; used if
multiple connected components need to be laid out.
metric_kwds: dict
Key word arguments to be passed to the metric function; used if
multiple connected components need to be laid out.
densmap: bool
Whether to use the density-augmented objective function to optimize
the embedding according to the densMAP algorithm.
densmap_kwds: dict
Key word arguments to be used by the densMAP optimization.
output_dens: bool
Whether to output local radii in the original data and the embedding.
output_metric: function
Function returning the distance between two points in embedding space and
the gradient of the distance wrt the first argument.
output_metric_kwds: dict
Key word arguments to be passed to the output_metric function.
euclidean_output: bool
Whether to use the faster code specialised for euclidean output metrics
parallel: bool (optional, default False)
Whether to run the computation using numba parallel.
Running in parallel is non-deterministic, and is not used
if a random seed has been set, to ensure reproducibility.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
tqdm_kwds: dict
Key word arguments to be used by the tqdm progress bar.
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding of ``graph`` into an ``n_components`` dimensional
euclidean space.
aux_data: dict
Auxiliary output returned with the embedding. When densMAP extension
is turned on, this dictionary includes local radii in the original
data (``rad_orig``) and in the embedding (``rad_emb``).
"""
graph = graph.tocoo()
graph.sum_duplicates()
n_vertices = graph.shape[1]
# For smaller datasets we can use more epochs
if graph.shape[0] <= 10000:
default_epochs = 500
else:
default_epochs = 200
# Use more epochs for densMAP
if densmap:
default_epochs += 200
if n_epochs is None:
n_epochs = default_epochs
# If n_epoch is a list, get the maximum epoch to reach
n_epochs_max = max(n_epochs) if isinstance(n_epochs, list) else n_epochs
if n_epochs_max > 10:
graph.data[graph.data < (graph.data.max() / float(n_epochs_max))] = 0.0
else:
graph.data[graph.data < (graph.data.max() / float(default_epochs))] = 0.0
graph.eliminate_zeros()
if isinstance(init, str) and init == "random":
embedding = random_state.uniform(
low=-10.0, high=10.0, size=(graph.shape[0], n_components)
).astype(np.float32)
elif isinstance(init, str) and init == "pca":
if scipy.sparse.issparse(data):
pca = TruncatedSVD(n_components=n_components, random_state=random_state)
else:
pca = PCA(n_components=n_components, random_state=random_state)
embedding = pca.fit_transform(data).astype(np.float32)
embedding = noisy_scale_coords(
embedding, random_state, max_coord=10, noise=0.0001
)
elif isinstance(init, str) and init == "spectral":
embedding = spectral_layout(
data,
graph,
n_components,
random_state,
metric=metric,
metric_kwds=metric_kwds,
)
# We add a little noise to avoid local minima for optimization to come
embedding = noisy_scale_coords(
embedding, random_state, max_coord=10, noise=0.0001
)
elif isinstance(init, str) and init == "tswspectral":
embedding = tswspectral_layout(
data,
graph,
n_components,
random_state,
metric=metric,
metric_kwds=metric_kwds,
)
embedding = noisy_scale_coords(
embedding, random_state, max_coord=10, noise=0.0001
)
else:
init_data = np.array(init)
if len(init_data.shape) == 2:
if np.unique(init_data, axis=0).shape[0] < init_data.shape[0]:
tree = KDTree(init_data)
dist, ind = tree.query(init_data, k=2)
nndist = np.mean(dist[:, 1])
embedding = init_data + random_state.normal(
scale=0.001 * nndist, size=init_data.shape
).astype(np.float32)
else:
embedding = init_data
epochs_per_sample = make_epochs_per_sample(graph.data, n_epochs_max)
head = graph.row
tail = graph.col
weight = graph.data
rng_state = random_state.randint(INT32_MIN, INT32_MAX, 3).astype(np.int64)
aux_data = {}
if densmap or output_dens:
if verbose:
print(ts() + " Computing original densities")
dists = densmap_kwds["graph_dists"]
mu_sum = np.zeros(n_vertices, dtype=np.float32)
ro = np.zeros(n_vertices, dtype=np.float32)
for i in range(len(head)):
j = head[i]
k = tail[i]
D = dists[j, k] * dists[j, k] # match sq-Euclidean used for embedding
mu = graph.data[i]
ro[j] += mu * D
ro[k] += mu * D
mu_sum[j] += mu
mu_sum[k] += mu
epsilon = 1e-8
ro = np.log(epsilon + (ro / mu_sum))
if densmap:
R = (ro - np.mean(ro)) / np.std(ro)
densmap_kwds["mu"] = graph.data
densmap_kwds["mu_sum"] = mu_sum
densmap_kwds["R"] = R
if output_dens:
aux_data["rad_orig"] = ro
embedding = (
10.0
* (embedding - np.min(embedding, 0))
/ (np.max(embedding, 0) - np.min(embedding, 0))
).astype(np.float32, order="C")
if euclidean_output:
embedding = optimize_layout_euclidean(
embedding,
embedding,
head,
tail,
n_epochs,
n_vertices,
epochs_per_sample,
a,
b,
rng_state,
gamma,
initial_alpha,
negative_sample_rate,
parallel=parallel,
verbose=verbose,
densmap=densmap,
densmap_kwds=densmap_kwds,
tqdm_kwds=tqdm_kwds,
move_other=True,
)
else:
embedding = optimize_layout_generic(
embedding,
embedding,
head,
tail,
n_epochs,
n_vertices,
epochs_per_sample,
a,
b,
rng_state,
gamma,
initial_alpha,
negative_sample_rate,
output_metric,
tuple(output_metric_kwds.values()),
verbose=verbose,
tqdm_kwds=tqdm_kwds,
move_other=True,
)
if isinstance(embedding, list):
aux_data["embedding_list"] = embedding
embedding = embedding[-1].copy()
if output_dens:
if verbose:
print(ts() + " Computing embedding densities")
# Compute graph in embedding
(
knn_indices,
knn_dists,
rp_forest,
) = nearest_neighbors(
embedding,
densmap_kwds["n_neighbors"],
"euclidean",
{},
False,
random_state,
verbose=verbose,
)
emb_graph, emb_sigmas, emb_rhos, emb_dists = fuzzy_simplicial_set(
embedding,
densmap_kwds["n_neighbors"],
random_state,
"euclidean",
{},
knn_indices,
knn_dists,
verbose=verbose,
return_dists=True,
)
emb_graph = emb_graph.tocoo()
emb_graph.sum_duplicates()
emb_graph.eliminate_zeros()
n_vertices = emb_graph.shape[1]
mu_sum = np.zeros(n_vertices, dtype=np.float32)
re = np.zeros(n_vertices, dtype=np.float32)
head = emb_graph.row
tail = emb_graph.col
for i in range(len(head)):
j = head[i]
k = tail[i]
D = emb_dists[j, k]
mu = emb_graph.data[i]
re[j] += mu * D
re[k] += mu * D
mu_sum[j] += mu
mu_sum[k] += mu
epsilon = 1e-8
re = np.log(epsilon + (re / mu_sum))
aux_data["rad_emb"] = re
return embedding, aux_data
| simplicial_set_embedding | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
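A sketch of calling this routine directly (the parameter values are illustrative, not prescribed by the source); in normal use ``UMAP.fit`` below assembles all of these arguments:

import numpy as np
from umap.umap_ import fuzzy_simplicial_set, simplicial_set_embedding, find_ab_params

X = np.random.RandomState(3).normal(size=(500, 20)).astype(np.float32)
rng = np.random.RandomState(3)
graph, _, _ = fuzzy_simplicial_set(X, 15, rng, "euclidean")
a, b = find_ab_params(spread=1.0, min_dist=0.1)
embedding, aux = simplicial_set_embedding(
    X, graph, 2,          # data, graph, n_components
    1.0, a, b,            # initial_alpha, a, b
    1.0, 5, 200,          # gamma, negative_sample_rate, n_epochs
    "spectral", rng,      # init, random_state
    "euclidean", {},      # metric, metric_kwds
    False, {}, False,     # densmap, densmap_kwds, output_dens
)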
def init_transform(indices, weights, embedding):
"""Given indices and weights and an original embeddings
initialize the positions of new points relative to the
indices and weights (of their neighbors in the source data).
Parameters
----------
indices: array of shape (n_new_samples, n_neighbors)
The indices of the neighbors of each new sample
weights: array of shape (n_new_samples, n_neighbors)
The membership strengths of associated 1-simplices
for each of the new samples.
embedding: array of shape (n_samples, dim)
The original embedding of the source data.
Returns
-------
new_embedding: array of shape (n_new_samples, dim)
An initial embedding of the new sample points.
"""
result = np.zeros((indices.shape[0], embedding.shape[1]), dtype=np.float32)
for i in range(indices.shape[0]):
for j in range(indices.shape[1]):
for d in range(embedding.shape[1]):
result[i, d] += weights[i, j] * embedding[indices[i, j], d]
return result
| init_transform | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
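For reference, the triple loop above is just a weighted average of neighbour embeddings; a vectorised NumPy equivalent (a sketch, not part of the source) is:

import numpy as np

def init_transform_vectorised(indices, weights, embedding):
    # result[i] = sum_j weights[i, j] * embedding[indices[i, j]]
    return np.einsum("ij,ijd->id", weights, embedding[indices]).astype(np.float32)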
def init_graph_transform(graph, embedding):
"""Given a bipartite graph representing the 1-simplices and strengths between the
new points and the original data set along with an embedding of the original points
initialize the positions of new points relative to the strengths (of their neighbors in the source data).
If a point is in our original data set it embeds at the original point's coordinates.
If a point has no neighbours in our original dataset it embeds as the np.nan vector.
Otherwise a point is the weighted average of its neighbours' embedding locations.
Parameters
----------
graph: csr_matrix (n_new_samples, n_samples)
A matrix indicating the 1-simplices and their associated strengths. These strengths should
be values between zero and one and not normalized, with a value of one indicating that the new point
was identical to one of our original points.
embedding: array of shape (n_samples, dim)
The original embedding of the source data.
Returns
-------
new_embedding: array of shape (n_new_samples, dim)
An initial embedding of the new sample points.
"""
result = np.zeros((graph.shape[0], embedding.shape[1]), dtype=np.float32)
for row_index in range(graph.shape[0]):
graph_row = graph[row_index]
if graph_row.nnz == 0:
result[row_index] = np.nan
continue
row_sum = graph_row.sum()
for graph_value, col_index in zip(graph_row.data, graph_row.indices):
if graph_value == 1:
result[row_index, :] = embedding[col_index, :]
break
result[row_index] += graph_value / row_sum * embedding[col_index]
return result
| init_graph_transform | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
def find_ab_params(spread, min_dist):
"""Fit a, b params for the differentiable curve used in lower
dimensional fuzzy simplicial complex construction. We want the
smooth curve (from a pre-defined family with simple gradient) that
best matches an offset exponential decay.
"""
def curve(x, a, b):
return 1.0 / (1.0 + a * x ** (2 * b))
xv = np.linspace(0, spread * 3, 300)
yv = np.zeros(xv.shape)
yv[xv < min_dist] = 1.0
yv[xv >= min_dist] = np.exp(-(xv[xv >= min_dist] - min_dist) / spread)
params, covar = curve_fit(curve, xv, yv)
return params[0], params[1]
| find_ab_params | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
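A quick illustrative call (the printed values are approximate and come from the curve fit itself): with the default ``spread=1.0`` and ``min_dist=0.1`` the fit lands near a ≈ 1.58 and b ≈ 0.90, which is what UMAP uses when ``a`` and ``b`` are not supplied.

from umap.umap_ import find_ab_params

a, b = find_ab_params(spread=1.0, min_dist=0.1)
print(a, b)   # roughly 1.58, 0.90 for these defaults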
def fit(self, X, y=None, ensure_all_finite=True, **kwargs):
"""Fit X into an embedded space.
Optionally use y for supervised dimension reduction.
Parameters
----------
X : array, shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is 'precomputed' X must be a square distance
matrix. Otherwise it contains a sample per row. If the method
is 'exact', X may be a sparse matrix of type 'csr', 'csc'
or 'coo'.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
ensure_all_finite : bool or 'allow-nan' (optional, default True)
Whether to raise an error on np.inf, np.nan, pd.NA in array. The possibilities are:
- True: force all values of the array to be finite.
- False: accept np.inf, np.nan, pd.NA in the array.
- 'allow-nan': accept only np.nan and pd.NA values; values cannot be infinite.
**kwargs : optional
Any additional keyword arguments are passed to _fit_embed_data.
"""
if self.metric in ("bit_hamming", "bit_jaccard"):
X = check_array(
X, dtype=np.uint8, order="C", ensure_all_finite=ensure_all_finite
)
else:
X = check_array(
X,
dtype=np.float32,
accept_sparse="csr",
order="C",
ensure_all_finite=ensure_all_finite,
)
self._raw_data = X
# Handle all the optional arguments, setting default
if self.a is None or self.b is None:
self._a, self._b = find_ab_params(self.spread, self.min_dist)
else:
self._a = self.a
self._b = self.b
if isinstance(self.init, np.ndarray):
init = check_array(
self.init,
dtype=np.float32,
accept_sparse=False,
ensure_all_finite=ensure_all_finite,
)
else:
init = self.init
self._initial_alpha = self.learning_rate
self.knn_indices = self.precomputed_knn[0]
self.knn_dists = self.precomputed_knn[1]
# #848: allow precomputed knn to not have a search index
if len(self.precomputed_knn) == 2:
self.knn_search_index = None
else:
self.knn_search_index = self.precomputed_knn[2]
self._validate_parameters()
if self.verbose:
print(str(self))
self._original_n_threads = numba.get_num_threads()
if self.n_jobs > 0 and self.n_jobs is not None:
numba.set_num_threads(self.n_jobs)
# Check if we should unique the data
# We've already ensured that we aren't in the precomputed case
if self.unique:
# check if the matrix is dense
if self._sparse_data:
# Call a sparse unique function
index, inverse, counts = csr_unique(X)
else:
index, inverse, counts = np.unique(
X,
return_index=True,
return_inverse=True,
return_counts=True,
axis=0,
)[1:4]
if self.verbose:
print(
"Unique=True -> Number of data points reduced from ",
X.shape[0],
" to ",
X[index].shape[0],
)
most_common = np.argmax(counts)
print(
"Most common duplicate is",
index[most_common],
" with a count of ",
counts[most_common],
)
# We'll expose an inverse map when unique=True for users to map from our internal structures to their data
self._unique_inverse_ = inverse
# If we aren't asking for unique use the full index.
# This will save special cases later.
else:
index = list(range(X.shape[0]))
inverse = list(range(X.shape[0]))
# Error check n_neighbors based on data size
if X[index].shape[0] <= self.n_neighbors:
if X[index].shape[0] == 1:
self.embedding_ = np.zeros(
(1, self.n_components)
) # needed for sklearn compatibility
return self
warn(
"n_neighbors is larger than the dataset size; truncating to "
"X.shape[0] - 1"
)
self._n_neighbors = X[index].shape[0] - 1
if self.densmap:
self._densmap_kwds["n_neighbors"] = self._n_neighbors
else:
self._n_neighbors = self.n_neighbors
# Note: unless it causes issues for setting 'index', could move this to
# initial sparsity check above
if self._sparse_data and not X.has_sorted_indices:
X.sort_indices()
random_state = check_random_state(self.random_state)
if self.verbose:
print(ts(), "Construct fuzzy simplicial set")
if self.metric == "precomputed" and self._sparse_data:
# For sparse precomputed distance matrices, we just argsort the rows to find
# nearest neighbors. To make this easier, we expect matrices that are
# symmetrical (so we can find neighbors by looking at rows in isolation,
# rather than also having to consider that sample's column too).
# print("Computing KNNs for sparse precomputed distances...")
if sparse_tril(X).getnnz() != sparse_triu(X).getnnz():
raise ValueError(
"Sparse precomputed distance matrices should be symmetrical!"
)
if not np.all(X.diagonal() == 0):
raise ValueError("Non-zero distances from samples to themselves!")
if self.knn_dists is None:
self._knn_indices = np.zeros((X.shape[0], self.n_neighbors), dtype=int)
self._knn_dists = np.zeros(self._knn_indices.shape, dtype=float)
for row_id in range(X.shape[0]):
# Find KNNs row-by-row
row_data = X[row_id].data
row_indices = X[row_id].indices
if len(row_data) < self._n_neighbors:
raise ValueError(
"Some rows contain fewer than n_neighbors distances!"
)
row_nn_data_indices = np.argsort(row_data)[: self._n_neighbors]
self._knn_indices[row_id] = row_indices[row_nn_data_indices]
self._knn_dists[row_id] = row_data[row_nn_data_indices]
else:
self._knn_indices = self.knn_indices
self._knn_dists = self.knn_dists
# Disconnect any vertices farther apart than _disconnection_distance
disconnected_index = self._knn_dists >= self._disconnection_distance
self._knn_indices[disconnected_index] = -1
self._knn_dists[disconnected_index] = np.inf
edges_removed = disconnected_index.sum()
(
self.graph_,
self._sigmas,
self._rhos,
self.graph_dists_,
) = fuzzy_simplicial_set(
X[index],
self.n_neighbors,
random_state,
"precomputed",
self._metric_kwds,
self._knn_indices,
self._knn_dists,
self.angular_rp_forest,
self.set_op_mix_ratio,
self.local_connectivity,
True,
self.verbose,
self.densmap or self.output_dens,
)
# Report the number of vertices with degree 0 in our umap.graph_
# This ensures that they were properly disconnected.
vertices_disconnected = np.sum(
np.array(self.graph_.sum(axis=1)).flatten() == 0
)
raise_disconnected_warning(
edges_removed,
vertices_disconnected,
self._disconnection_distance,
self._raw_data.shape[0],
verbose=self.verbose,
)
# Handle small cases efficiently by computing all distances
elif X[index].shape[0] < 4096 and not self.force_approximation_algorithm:
self._small_data = True
try:
# sklearn pairwise_distances fails for callable metric on sparse data
_m = self.metric if self._sparse_data else self._input_distance_func
dmat = pairwise_distances(X[index], metric=_m, **self._metric_kwds)
except (ValueError, TypeError) as e:
# metric is numba.jit'd or not supported by sklearn,
# fallback to pairwise special
if self._sparse_data:
# Get a fresh metric since we are casting to dense
if not callable(self.metric):
_m = dist.named_distances[self.metric]
dmat = dist.pairwise_special_metric(
X[index].toarray(),
metric=_m,
kwds=self._metric_kwds,
ensure_all_finite=ensure_all_finite,
)
else:
dmat = dist.pairwise_special_metric(
X[index],
metric=self._input_distance_func,
kwds=self._metric_kwds,
ensure_all_finite=ensure_all_finite,
)
else:
dmat = dist.pairwise_special_metric(
X[index],
metric=self._input_distance_func,
kwds=self._metric_kwds,
ensure_all_finite=ensure_all_finite,
)
# set any values greater than disconnection_distance to be np.inf.
# This will have no effect when _disconnection_distance is not set since it defaults to np.inf.
edges_removed = np.sum(dmat >= self._disconnection_distance)
dmat[dmat >= self._disconnection_distance] = np.inf
(
self.graph_,
self._sigmas,
self._rhos,
self.graph_dists_,
) = fuzzy_simplicial_set(
dmat,
self._n_neighbors,
random_state,
"precomputed",
self._metric_kwds,
None,
None,
self.angular_rp_forest,
self.set_op_mix_ratio,
self.local_connectivity,
True,
self.verbose,
self.densmap or self.output_dens,
)
# Report the number of vertices with degree 0 in our umap.graph_
# This ensures that they were properly disconnected.
vertices_disconnected = np.sum(
np.array(self.graph_.sum(axis=1)).flatten() == 0
)
raise_disconnected_warning(
edges_removed,
vertices_disconnected,
self._disconnection_distance,
self._raw_data.shape[0],
verbose=self.verbose,
)
else:
# Standard case
self._small_data = False
if self._sparse_data and self.metric in pynn_sparse_named_distances:
nn_metric = self.metric
elif not self._sparse_data and self.metric in pynn_named_distances:
nn_metric = self.metric
else:
nn_metric = self._input_distance_func
if self.knn_dists is None:
(
self._knn_indices,
self._knn_dists,
self._knn_search_index,
) = nearest_neighbors(
X[index],
self._n_neighbors,
nn_metric,
self._metric_kwds,
self.angular_rp_forest,
random_state,
self.low_memory,
use_pynndescent=True,
n_jobs=self.n_jobs,
verbose=self.verbose,
)
else:
self._knn_indices = self.knn_indices
self._knn_dists = self.knn_dists
self._knn_search_index = self.knn_search_index
# Disconnect any vertices farther apart than _disconnection_distance
disconnected_index = self._knn_dists >= self._disconnection_distance
self._knn_indices[disconnected_index] = -1
self._knn_dists[disconnected_index] = np.inf
edges_removed = disconnected_index.sum()
(
self.graph_,
self._sigmas,
self._rhos,
self.graph_dists_,
) = fuzzy_simplicial_set(
X[index],
self.n_neighbors,
random_state,
nn_metric,
self._metric_kwds,
self._knn_indices,
self._knn_dists,
self.angular_rp_forest,
self.set_op_mix_ratio,
self.local_connectivity,
True,
self.verbose,
self.densmap or self.output_dens,
)
# Report the number of vertices with degree 0 in our umap.graph_
# This ensures that they were properly disconnected.
vertices_disconnected = np.sum(
np.array(self.graph_.sum(axis=1)).flatten() == 0
)
raise_disconnected_warning(
edges_removed,
vertices_disconnected,
self._disconnection_distance,
self._raw_data.shape[0],
verbose=self.verbose,
)
# Currently not checking if any duplicate points have differing labels
# Might be worth throwing a warning...
if y is not None:
len_X = len(X) if not self._sparse_data else X.shape[0]
if len_X != len(y):
raise ValueError(
"Length of x = {len_x}, length of y = {len_y}, while it must be equal.".format(
len_x=len_X, len_y=len(y)
)
)
if self.target_metric == "string":
y_ = y[index]
else:
y_ = check_array(y, ensure_2d=False, ensure_all_finite=ensure_all_finite)[
index
]
if self.target_metric == "categorical":
if self.target_weight < 1.0:
far_dist = 2.5 * (1.0 / (1.0 - self.target_weight))
else:
far_dist = 1.0e12
self.graph_ = discrete_metric_simplicial_set_intersection(
self.graph_, y_, far_dist=far_dist
)
elif self.target_metric in dist.DISCRETE_METRICS:
if self.target_weight < 1.0:
scale = 2.5 * (1.0 / (1.0 - self.target_weight))
else:
scale = 1.0e12
# self.graph_ = discrete_metric_simplicial_set_intersection(
# self.graph_,
# y_,
# metric=self.target_metric,
# metric_kws=self.target_metric_kwds,
# metric_scale=scale
# )
metric_kws = dist.get_discrete_params(y_, self.target_metric)
self.graph_ = discrete_metric_simplicial_set_intersection(
self.graph_,
y_,
metric=self.target_metric,
metric_kws=metric_kws,
metric_scale=scale,
)
else:
if len(y_.shape) == 1:
y_ = y_.reshape(-1, 1)
if self.target_n_neighbors == -1:
target_n_neighbors = self._n_neighbors
else:
target_n_neighbors = self.target_n_neighbors
# Handle the small case as precomputed as before
if y.shape[0] < 4096:
try:
ydmat = pairwise_distances(
y_, metric=self.target_metric, **self._target_metric_kwds
)
except (TypeError, ValueError):
ydmat = dist.pairwise_special_metric(
y_,
metric=self.target_metric,
kwds=self._target_metric_kwds,
ensure_all_finite=ensure_all_finite,
)
(
target_graph,
target_sigmas,
target_rhos,
) = fuzzy_simplicial_set(
ydmat,
target_n_neighbors,
random_state,
"precomputed",
self._target_metric_kwds,
None,
None,
False,
1.0,
1.0,
False,
)
else:
# Standard case
(
target_graph,
target_sigmas,
target_rhos,
) = fuzzy_simplicial_set(
y_,
target_n_neighbors,
random_state,
self.target_metric,
self._target_metric_kwds,
None,
None,
False,
1.0,
1.0,
False,
)
# product = self.graph_.multiply(target_graph)
# # self.graph_ = 0.99 * product + 0.01 * (self.graph_ +
# # target_graph -
# # product)
# self.graph_ = product
self.graph_ = general_simplicial_set_intersection(
self.graph_, target_graph, self.target_weight
)
self.graph_ = reset_local_connectivity(self.graph_)
self._supervised = True
else:
self._supervised = False
if self.densmap or self.output_dens:
self._densmap_kwds["graph_dists"] = self.graph_dists_
if self.verbose:
print(ts(), "Construct embedding")
if self.transform_mode == "embedding":
epochs = (
self.n_epochs_list if self.n_epochs_list is not None else self.n_epochs
)
self.embedding_, aux_data = self._fit_embed_data(
self._raw_data[index],
epochs,
init,
random_state, # JH why raw data?
**kwargs,
)
if self.n_epochs_list is not None:
if "embedding_list" not in aux_data:
raise KeyError(
"No list of embedding were found in 'aux_data'. "
"It is likely the layout optimization function "
"doesn't support the list of int for 'n_epochs'."
)
else:
self.embedding_list_ = [
e[inverse] for e in aux_data["embedding_list"]
]
# Assign any points that are fully disconnected from our manifold(s) to have embedding
# coordinates of np.nan. These will be filtered by our plotting functions automatically.
# They also prevent users from being deceived by a distance query to one of these points.
# Might be worth moving this into simplicial_set_embedding or _fit_embed_data
disconnected_vertices = np.array(self.graph_.sum(axis=1)).flatten() == 0
if len(disconnected_vertices) > 0:
self.embedding_[disconnected_vertices] = np.full(
self.n_components, np.nan
)
self.embedding_ = self.embedding_[inverse]
if self.output_dens:
self.rad_orig_ = aux_data["rad_orig"][inverse]
self.rad_emb_ = aux_data["rad_emb"][inverse]
if self.verbose:
print(ts() + " Finished embedding")
numba.set_num_threads(self._original_n_threads)
self._input_hash = joblib.hash(self._raw_data)
if self.transform_mode == "embedding":
# Set number of features out for sklearn API
self._n_features_out = self.embedding_.shape[1]
else:
self._n_features_out = self.graph_.shape[1]
return self
| fit | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
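A short usage sketch (the digits dataset is just an example choice) for plain and supervised fitting; passing ``y`` routes through the discrete/metric intersection code paths above:

import umap
from sklearn.datasets import load_digits

digits = load_digits()

# Unsupervised fit.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=42).fit(digits.data)
unsupervised = reducer.embedding_

# Supervised fit: the target labels reshape the fuzzy graph before embedding.
supervised = umap.UMAP(target_weight=0.5, random_state=42).fit(
    digits.data, y=digits.target
).embedding_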
def _fit_embed_data(self, X, n_epochs, init, random_state, **kwargs):
"""A method wrapper for simplicial_set_embedding that can be
replaced by subclasses. Arbitrary keyword arguments can be passed
through .fit() and .fit_transform().
"""
return simplicial_set_embedding(
X,
self.graph_,
self.n_components,
self._initial_alpha,
self._a,
self._b,
self.repulsion_strength,
self.negative_sample_rate,
n_epochs,
init,
random_state,
self._input_distance_func,
self._metric_kwds,
self.densmap,
self._densmap_kwds,
self.output_dens,
self._output_distance_func,
self._output_metric_kwds,
self.output_metric in ("euclidean", "l2"),
self.random_state is None,
self.verbose,
tqdm_kwds=self.tqdm_kwds,
) | A method wrapper for simplicial_set_embedding that can be
replaced by subclasses. Arbitrary keyword arguments can be passed
through .fit() and .fit_transform(). | _fit_embed_data | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
def fit_transform(self, X, y=None, ensure_all_finite=True, **kwargs):
"""Fit X into an embedded space and return that transformed
output.
Parameters
----------
X : array, shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is 'precomputed' X must be a square distance
matrix. Otherwise it contains a sample per row.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
ensure_all_finite : Whether to raise an error on np.inf, np.nan, pd.NA in array.
The possibilities are: - True: Force all values of array to be finite.
- False: accepts np.inf, np.nan, pd.NA in array.
- 'allow-nan': accepts only np.nan and pd.NA values in array.
Values cannot be infinite.
**kwargs : Any additional keyword arguments are passed to _fit_embed_data.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the training data in low-dimensional space.
or a tuple (X_new, r_orig, r_emb) if ``output_dens`` flag is set,
which additionally includes:
r_orig: array, shape (n_samples)
Local radii of data points in the original data space (log-transformed).
r_emb: array, shape (n_samples)
Local radii of data points in the embedding (log-transformed).
"""
self.fit(X, y, ensure_all_finite, **kwargs)
if self.transform_mode == "embedding":
if self.output_dens:
return self.embedding_, self.rad_orig_, self.rad_emb_
else:
return self.embedding_
elif self.transform_mode == "graph":
return self.graph_
else:
raise ValueError(
"Unrecognized transform mode {}; should be one of 'embedding' or 'graph'".format(
self.transform_mode
)
) | Fit X into an embedded space and return that transformed
output.
Parameters
----------
X : array, shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is 'precomputed' X must be a square distance
matrix. Otherwise it contains a sample per row.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
ensure_all_finite : Whether to raise an error on np.inf, np.nan, pd.NA in array.
The possibilities are: - True: Force all values of array to be finite.
- False: accepts np.inf, np.nan, pd.NA in array.
- 'allow-nan': accepts only np.nan and pd.NA values in array.
Values cannot be infinite.
**kwargs : Any additional keyword arguments are passed to _fit_embed_data.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the training data in low-dimensional space.
or a tuple (X_new, r_orig, r_emb) if ``output_dens`` flag is set,
which additionally includes:
r_orig: array, shape (n_samples)
Local radii of data points in the original data space (log-transformed).
r_emb: array, shape (n_samples)
Local radii of data points in the embedding (log-transformed). | fit_transform | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
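A minimal unsupervised sketch of ``fit_transform``; the digits dataset is an illustrative assumption.
# Hedged example: one-call embedding of a dataset. With output_dens=True the
# call would instead return a tuple (embedding, r_orig, r_emb).
import umap
from sklearn.datasets import load_digits

X = load_digits().data
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
print(embedding.shape)  # (1797, 2)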
def transform(self, X, ensure_all_finite=True):
"""Transform X into the existing embedded space and return that
transformed output.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
ensure_all_finite : Whether to raise an error on np.inf, np.nan, pd.NA in array.
The possibilities are: - True: Force all values of array to be finite.
- False: accepts np.inf, np.nan, pd.NA in array.
- 'allow-nan': accepts only np.nan and pd.NA values in array.
Values cannot be infinite.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the new data in low-dimensional space.
"""
# If we fit just a single instance then error
if self._raw_data.shape[0] == 1:
raise ValueError(
"Transform unavailable when model was fit with only a single data sample."
)
# If we just have the original input then short circuit things
if self.metric in ("bit_hamming", "bit_jaccard"):
X = check_array(
X, dtype=np.uint8, order="C", ensure_all_finite=ensure_all_finite
)
else:
X = check_array(
X,
dtype=np.float32,
accept_sparse="csr",
order="C",
ensure_all_finite=ensure_all_finite,
)
x_hash = joblib.hash(X)
if x_hash == self._input_hash:
if self.transform_mode == "embedding":
return self.embedding_
elif self.transform_mode == "graph":
return self.graph_
else:
raise ValueError(
"Unrecognized transform mode {}; should be one of 'embedding' or 'graph'".format(
self.transform_mode
)
)
if self.densmap:
raise NotImplementedError(
"Transforming data into an existing embedding not supported for densMAP."
)
# #848: knn_search_index is allowed to be None if not transforming new data,
# so now we must validate that if it exists it is not None
if hasattr(self, "_knn_search_index") and self._knn_search_index is None:
raise NotImplementedError(
"No search index available: transforming data"
" into an existing embedding is not supported"
)
# X = check_array(X, dtype=np.float32, order="C", accept_sparse="csr")
random_state = check_random_state(self.transform_seed)
rng_state = random_state.randint(INT32_MIN, INT32_MAX, 3).astype(np.int64)
if self.metric == "precomputed":
warn(
"Transforming new data with precomputed metric. "
"We are assuming the input data is a matrix of distances from the new points "
"to the points in the training set. If the input matrix is sparse, it should "
"contain distances from the new points to their nearest neighbours "
"or approximate nearest neighbours in the training set."
)
assert X.shape[1] == self._raw_data.shape[0]
if scipy.sparse.issparse(X):
indices = np.full(
(X.shape[0], self._n_neighbors), dtype=np.int32, fill_value=-1
)
dists = np.full_like(indices, dtype=np.float32, fill_value=-1)
for i in range(X.shape[0]):
data_indices = np.argsort(X[i].data)
if len(data_indices) < self._n_neighbors:
raise ValueError(
f"Need at least n_neighbors ({self.n_neighbors}) distances for each row!"
)
indices[i] = X[i].indices[data_indices[: self._n_neighbors]]
dists[i] = X[i].data[data_indices[: self._n_neighbors]]
else:
indices = np.argsort(X, axis=1)[:, : self._n_neighbors].astype(np.int32)
dists = np.take_along_axis(X, indices, axis=1)
assert np.min(indices) >= 0 and np.min(dists) >= 0.0
elif self._small_data:
try:
# sklearn pairwise_distances fails for callable metric on sparse data
_m = self.metric if self._sparse_data else self._input_distance_func
dmat = pairwise_distances(
X, self._raw_data, metric=_m, **self._metric_kwds
)
except (TypeError, ValueError):
# metric is numba.jit'd or not supported by sklearn,
# fallback to pairwise special
if self._sparse_data:
# Get a fresh metric since we are casting to dense
if not callable(self.metric):
_m = dist.named_distances[self.metric]
dmat = dist.pairwise_special_metric(
X.toarray(),
self._raw_data.toarray(),
metric=_m,
kwds=self._metric_kwds,
ensure_all_finite=ensure_all_finite,
)
else:
dmat = dist.pairwise_special_metric(
X,
self._raw_data,
metric=self._input_distance_func,
kwds=self._metric_kwds,
ensure_all_finite=ensure_all_finite,
)
else:
dmat = dist.pairwise_special_metric(
X,
self._raw_data,
metric=self._input_distance_func,
kwds=self._metric_kwds,
ensure_all_finite=ensure_all_finite,
)
indices = np.argpartition(dmat, self._n_neighbors)[:, : self._n_neighbors]
dmat_shortened = submatrix(dmat, indices, self._n_neighbors)
indices_sorted = np.argsort(dmat_shortened)
indices = submatrix(indices, indices_sorted, self._n_neighbors)
dists = submatrix(dmat_shortened, indices_sorted, self._n_neighbors)
else:
epsilon = 0.24 if self._knn_search_index._angular_trees else 0.12
indices, dists = self._knn_search_index.query(
X, self.n_neighbors, epsilon=epsilon
)
dists = dists.astype(np.float32, order="C")
# Remove any nearest neighbours whose distances are greater than our disconnection_distance
indices[dists >= self._disconnection_distance] = -1
adjusted_local_connectivity = max(0.0, self.local_connectivity - 1.0)
sigmas, rhos = smooth_knn_dist(
dists,
float(self._n_neighbors),
local_connectivity=float(adjusted_local_connectivity),
)
rows, cols, vals, dists = compute_membership_strengths(
indices, dists, sigmas, rhos, bipartite=True
)
graph = scipy.sparse.coo_matrix(
(vals, (rows, cols)), shape=(X.shape[0], self._raw_data.shape[0])
)
if self.transform_mode == "graph":
return graph
# This was a very specially constructed graph with constant degree.
# That lets us do fancy unpacking by reshaping the csr matrix indices
# and data. Doing so relies on the constant degree assumption!
# csr_graph = normalize(graph.tocsr(), norm="l1")
# inds = csr_graph.indices.reshape(X.shape[0], self._n_neighbors)
# weights = csr_graph.data.reshape(X.shape[0], self._n_neighbors)
# embedding = init_transform(inds, weights, self.embedding_)
# This is slower than the commented-out numba.jit'd code above.
# It handles the fact that our nearest neighbour graph can now contain variable numbers of vertices.
csr_graph = graph.tocsr()
csr_graph.eliminate_zeros()
embedding = init_graph_transform(csr_graph, self.embedding_)
if self.n_epochs is None:
# For smaller datasets we can use more epochs
if graph.shape[0] <= 10000:
n_epochs = 100
else:
n_epochs = 30
else:
n_epochs = int(self.n_epochs // 3.0)
graph.data[graph.data < (graph.data.max() / float(n_epochs))] = 0.0
graph.eliminate_zeros()
epochs_per_sample = make_epochs_per_sample(graph.data, n_epochs)
head = graph.row
tail = graph.col
weight = graph.data
# optimize_layout = make_optimize_layout(
# self._output_distance_func,
# tuple(self.output_metric_kwds.values()),
# )
if self.output_metric == "euclidean":
embedding = optimize_layout_euclidean(
embedding,
self.embedding_.astype(np.float32, copy=True), # Fixes #179 & #217,
head,
tail,
n_epochs,
graph.shape[1],
epochs_per_sample,
self._a,
self._b,
rng_state,
self.repulsion_strength,
self._initial_alpha / 4.0,
self.negative_sample_rate,
self.random_state is None,
verbose=self.verbose,
tqdm_kwds=self.tqdm_kwds,
)
else:
embedding = optimize_layout_generic(
embedding,
self.embedding_.astype(np.float32, copy=True), # Fixes #179 & #217
head,
tail,
n_epochs,
graph.shape[1],
epochs_per_sample,
self._a,
self._b,
rng_state,
self.repulsion_strength,
self._initial_alpha / 4.0,
self.negative_sample_rate,
self._output_distance_func,
tuple(self._output_metric_kwds.values()),
verbose=self.verbose,
tqdm_kwds=self.tqdm_kwds,
)
return embedding | Transform X into the existing embedded space and return that
transformed output.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
ensure_all_finite : Whether to raise an error on np.inf, np.nan, pd.NA in array.
The possibilities are: - True: Force all values of array to be finite.
- False: accepts np.inf, np.nan, pd.NA in array.
- 'allow-nan': accepts only np.nan and pd.NA values in array.
Values cannot be infinite.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the new data in low-dimensional space. | transform | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
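A hedged sketch of embedding held-out points with ``transform``; the train/test split is purely illustrative.
# Hedged example: transform new samples into an existing embedding.
import umap
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X_train, X_test = train_test_split(load_digits().data, test_size=0.25, random_state=0)
mapper = umap.UMAP(n_neighbors=15, random_state=42).fit(X_train)
test_embedding = mapper.transform(X_test)  # shape (len(X_test), 2)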
def inverse_transform(self, X):
"""Transform X in the existing embedded space back into the input
data space and return that transformed output.
Parameters
----------
X : array, shape (n_samples, n_components)
New points to be inverse transformed.
Returns
-------
X_new : array, shape (n_samples, n_features)
Generated data points corresponding to new data in data space.
"""
if self._sparse_data:
raise ValueError("Inverse transform not available for sparse input.")
elif self._inverse_distance_func is None:
raise ValueError("Inverse transform not available for given metric.")
elif self.densmap:
raise ValueError("Inverse transform not available for densMAP.")
elif self.n_components >= 8:
warn(
"Inverse transform works best with low dimensional embeddings."
" Results may be poor, or this approach to inverse transform"
" may fail altogether! If you need a high dimensional latent"
" space and inverse transform operations consider using an"
" autoencoder."
)
elif self.transform_mode == "graph":
raise ValueError(
"Inverse transform not available for transform_mode = 'graph'"
)
X = check_array(X, dtype=np.float32, order="C")
random_state = check_random_state(self.transform_seed)
rng_state = random_state.randint(INT32_MIN, INT32_MAX, 3).astype(np.int64)
# build Delaunay complex (does this not assume a roughly euclidean output metric?)
deltri = scipy.spatial.Delaunay(
self.embedding_, incremental=True, qhull_options="QJ"
)
neighbors = deltri.simplices[deltri.find_simplex(X)]
adjmat = scipy.sparse.lil_matrix(
(self.embedding_.shape[0], self.embedding_.shape[0]), dtype=int
)
for i in np.arange(0, deltri.simplices.shape[0]):
for j in deltri.simplices[i]:
if j < self.embedding_.shape[0]:
idx = deltri.simplices[i][
deltri.simplices[i] < self.embedding_.shape[0]
]
adjmat[j, idx] = 1
adjmat[idx, j] = 1
adjmat = scipy.sparse.csr_matrix(adjmat)
min_vertices = min(self._raw_data.shape[-1], self._raw_data.shape[0])
neighborhood = [
breadth_first_search(adjmat, v[0], min_vertices=min_vertices)
for v in neighbors
]
if callable(self.output_metric):
# need to create another numba.jit-able wrapper for callable
# output_metrics that return a tuple (already checked that it does
# during param validation in `fit` method)
_out_m = self.output_metric
@numba.njit(fastmath=True)
def _output_dist_only(x, y, *kwds):
return _out_m(x, y, *kwds)[0]
dist_only_func = _output_dist_only
elif self.output_metric in dist.named_distances.keys():
dist_only_func = dist.named_distances[self.output_metric]
else:
# shouldn't really ever get here because of checks already performed,
# but works as a failsafe in case attr was altered manually after fitting
raise ValueError(
"Unrecognized output metric: {}".format(self.output_metric)
)
dist_args = tuple(self._output_metric_kwds.values())
distances = [
np.array(
[
dist_only_func(X[i], self.embedding_[nb], *dist_args)
for nb in neighborhood[i]
]
)
for i in range(X.shape[0])
]
idx = np.array([np.argsort(e)[:min_vertices] for e in distances])
dists_output_space = np.array(
[distances[i][idx[i]] for i in range(len(distances))]
)
indices = np.array([neighborhood[i][idx[i]] for i in range(len(neighborhood))])
rows, cols, distances = np.array(
[
[i, indices[i, j], dists_output_space[i, j]]
for i in range(indices.shape[0])
for j in range(min_vertices)
]
).T
# calculate membership strength of each edge
weights = 1 / (1 + self._a * distances ** (2 * self._b))
# compute 1-skeleton
# convert 1-skeleton into coo_matrix adjacency matrix
graph = scipy.sparse.coo_matrix(
(weights, (rows, cols)), shape=(X.shape[0], self._raw_data.shape[0])
)
# That lets us do fancy unpacking by reshaping the csr matrix indices
# and data. Doing so relies on the constant degree assumption!
# csr_graph = graph.tocsr()
csr_graph = normalize(graph.tocsr(), norm="l1")
inds = csr_graph.indices.reshape(X.shape[0], min_vertices)
weights = csr_graph.data.reshape(X.shape[0], min_vertices)
inv_transformed_points = init_transform(inds, weights, self._raw_data)
if self.n_epochs is None:
# For smaller datasets we can use more epochs
if graph.shape[0] <= 10000:
n_epochs = 100
else:
n_epochs = 30
else:
n_epochs = int(self.n_epochs // 3.0)
# graph.data[graph.data < (graph.data.max() / float(n_epochs))] = 0.0
# graph.eliminate_zeros()
epochs_per_sample = make_epochs_per_sample(graph.data, n_epochs)
head = graph.row
tail = graph.col
weight = graph.data
inv_transformed_points = optimize_layout_inverse(
inv_transformed_points,
self._raw_data,
head,
tail,
weight,
self._sigmas,
self._rhos,
n_epochs,
graph.shape[1],
epochs_per_sample,
self._a,
self._b,
rng_state,
self.repulsion_strength,
self._initial_alpha / 4.0,
self.negative_sample_rate,
self._inverse_distance_func,
tuple(self._metric_kwds.values()),
verbose=self.verbose,
tqdm_kwds=self.tqdm_kwds,
)
return inv_transformed_points | Transform X in the existing embedded space back into the input
data space and return that transformed output.
Parameters
----------
X : array, shape (n_samples, n_components)
New points to be inverse transformed.
Returns
-------
X_new : array, shape (n_samples, n_features)
Generated data points corresponding to new data in data space. | inverse_transform | python | lmcinnes/umap | umap/umap_.py | https://github.com/lmcinnes/umap/blob/master/umap/umap_.py | BSD-3-Clause |
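A hedged sketch of ``inverse_transform``; the query point is taken as the centroid of the training embedding so that it lies inside the Delaunay triangulation, and the dataset is an illustrative assumption.
# Hedged example: map a 2-D embedding point back into the original data space.
import numpy as np
import umap
from sklearn.datasets import load_digits

mapper = umap.UMAP(n_components=2, random_state=42).fit(load_digits().data)
query = mapper.embedding_.mean(axis=0, keepdims=True).astype(np.float32)
reconstruction = mapper.inverse_transform(query)  # shape (1, 64)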
def ll_dirichlet(data1, data2):
"""The symmetric relative log likelihood of rolling data2 vs data1
in n trials on a die that rolled data1 in sum(data1) trials.
..math::
D(data1, data2) = DirichletMultinomail(data2 | data1)
"""
n1 = np.sum(data1)
n2 = np.sum(data2)
log_b = 0.0
self_denom1 = 0.0
self_denom2 = 0.0
for i in range(data1.shape[0]):
if data1[i] * data2[i] > 0.9:
log_b += log_beta(data1[i], data2[i])
self_denom1 += log_single_beta(data1[i])
self_denom2 += log_single_beta(data2[i])
else:
if data1[i] > 0.9:
self_denom1 += log_single_beta(data1[i])
if data2[i] > 0.9:
self_denom2 += log_single_beta(data2[i])
return np.sqrt(
1.0 / n2 * (log_b - log_beta(n1, n2) - (self_denom2 - log_single_beta(n2)))
+ 1.0 / n1 * (log_b - log_beta(n2, n1) - (self_denom1 - log_single_beta(n1)))
) | The symmetric relative log likelihood of rolling data2 vs data1
in sum(data2) trials on a die that rolled data1 in sum(data1) trials.
.. math::
D(data1, data2) = DirichletMultinomial(data2 | data1) | ll_dirichlet | python | lmcinnes/umap | umap/distances.py | https://github.com/lmcinnes/umap/blob/master/umap/distances.py | BSD-3-Clause |
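A hedged sketch of calling ``ll_dirichlet`` directly on two count vectors (e.g. token counts); the vectors are made-up illustrative values.
# Hedged example: symmetric Dirichlet-multinomial dissimilarity between counts.
import numpy as np
from umap.distances import ll_dirichlet

counts_a = np.array([4.0, 0.0, 1.0, 3.0])
counts_b = np.array([2.0, 1.0, 0.0, 5.0])
d = ll_dirichlet(counts_a, counts_b)  # non-negative, symmetric in its arguments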
def symmetric_kl_grad(x, y, z=1e-11): # pragma: no cover
"""
symmetrized KL divergence and its gradient
"""
n = x.shape[0]
x_sum = 0.0
y_sum = 0.0
kl1 = 0.0
kl2 = 0.0
for i in range(n):
x[i] += z
x_sum += x[i]
y[i] += z
y_sum += y[i]
for i in range(n):
x[i] /= x_sum
y[i] /= y_sum
for i in range(n):
kl1 += x[i] * np.log(x[i] / y[i])
kl2 += y[i] * np.log(y[i] / x[i])
dist = (kl1 + kl2) / 2
grad = (np.log(y / x) - (x / y) + 1) / 2
return dist, grad | symmetrized KL divergence and its gradient | symmetric_kl_grad | python | lmcinnes/umap | umap/distances.py | https://github.com/lmcinnes/umap/blob/master/umap/distances.py | BSD-3-Clause |
def fast_knn_indices(X, n_neighbors):
"""A fast computation of knn indices.
Parameters
----------
X: array of shape (n_samples, n_features)
The input data to compute the k-neighbor indices of.
n_neighbors: int
The number of nearest neighbors to compute for each sample in ``X``.
Returns
-------
knn_indices: array of shape (n_samples, n_neighbors)
The indices on the ``n_neighbors`` closest points in the dataset.
"""
knn_indices = np.empty((X.shape[0], n_neighbors), dtype=np.int32)
for row in numba.prange(X.shape[0]):
# v = np.argsort(X[row]) # Need to call argsort this way for numba
v = X[row].argsort(kind="quicksort")
v = v[:n_neighbors]
knn_indices[row] = v
return knn_indices | A fast computation of knn indices.
Parameters
----------
X: array of shape (n_samples, n_features)
The input data to compute the k-neighbor indices of.
n_neighbors: int
The number of nearest neighbors to compute for each sample in ``X``.
Returns
-------
knn_indices: array of shape (n_samples, n_neighbors)
The indices on the ``n_neighbors`` closest points in the dataset. | fast_knn_indices | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
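A hedged sketch of ``fast_knn_indices`` on a dense pairwise-distance matrix; the random data and the scikit-learn helper are illustrative assumptions.
# Hedged example: per-row indices of the k smallest distances.
import numpy as np
from sklearn.metrics import pairwise_distances
from umap.utils import fast_knn_indices

X = np.random.RandomState(0).normal(size=(200, 8))
dmat = pairwise_distances(X)
knn = fast_knn_indices(dmat, 5)  # shape (200, 5); column 0 is each point itself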
def tau_rand_int(state):
"""A fast (pseudo)-random number generator.
Parameters
----------
state: array of int64, shape (3,)
The internal state of the rng
Returns
-------
A (pseudo)-random int32 value
"""
state[0] = (((state[0] & 4294967294) << 12) & 0xFFFFFFFF) ^ (
(((state[0] << 13) & 0xFFFFFFFF) ^ state[0]) >> 19
)
state[1] = (((state[1] & 4294967288) << 4) & 0xFFFFFFFF) ^ (
(((state[1] << 2) & 0xFFFFFFFF) ^ state[1]) >> 25
)
state[2] = (((state[2] & 4294967280) << 17) & 0xFFFFFFFF) ^ (
(((state[2] << 3) & 0xFFFFFFFF) ^ state[2]) >> 11
)
return state[0] ^ state[1] ^ state[2] | A fast (pseudo)-random number generator.
Parameters
----------
state: array of int64, shape (3,)
The internal state of the rng
Returns
-------
A (pseudo)-random int32 value | tau_rand_int | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
def tau_rand(state):
"""A fast (pseudo)-random number generator for floats in the range [0,1]
Parameters
----------
state: array of int64, shape (3,)
The internal state of the rng
Returns
-------
A (pseudo)-random float32 in the interval [0, 1]
"""
integer = tau_rand_int(state)
return abs(float(integer) / 0x7FFFFFFF) | A fast (pseudo)-random number generator for floats in the range [0,1]
Parameters
----------
state: array of int64, shape (3,)
The internal state of the rng
Returns
-------
A (pseudo)-random float32 in the interval [0, 1] | tau_rand | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
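A hedged sketch showing that both Tausworthe-style generators consume and mutate a three-element int64 state array.
# Hedged example: shared rng_state for tau_rand_int and tau_rand.
import numpy as np
from umap.utils import tau_rand_int, tau_rand

rng_state = np.random.RandomState(42).randint(-(2**31), 2**31 - 1, 3).astype(np.int64)
i = tau_rand_int(rng_state)  # pseudo-random int32-range value; rng_state is updated in place
f = tau_rand(rng_state)      # pseudo-random float in [0, 1]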
def norm(vec):
"""Compute the (standard l2) norm of a vector.
Parameters
----------
vec: array of shape (dim,)
Returns
-------
The l2 norm of vec.
"""
result = 0.0
for i in range(vec.shape[0]):
result += vec[i] ** 2
return np.sqrt(result) | Compute the (standard l2) norm of a vector.
Parameters
----------
vec: array of shape (dim,)
Returns
-------
The l2 norm of vec. | norm | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
def submatrix(dmat, indices_col, n_neighbors):
"""Return a submatrix given an orginal matrix and the indices to keep.
Parameters
----------
dmat: array, shape (n_samples, n_samples)
Original matrix.
indices_col: array, shape (n_samples, n_neighbors)
Indices to keep. Each row consists of the indices of the columns.
n_neighbors: int
Number of neighbors.
Returns
-------
submat: array, shape (n_samples, n_neighbors)
The corresponding submatrix.
"""
n_samples_transform, n_samples_fit = dmat.shape
submat = np.zeros((n_samples_transform, n_neighbors), dtype=dmat.dtype)
for i in numba.prange(n_samples_transform):
for j in numba.prange(n_neighbors):
submat[i, j] = dmat[i, indices_col[i, j]]
return submat | Return a submatrix given an original matrix and the indices to keep.
Parameters
----------
dmat: array, shape (n_samples, n_samples)
Original matrix.
indices_col: array, shape (n_samples, n_neighbors)
Indices to keep. Each row consists of the indices of the columns.
n_neighbors: int
Number of neighbors.
Returns
-------
submat: array, shape (n_samples, n_neighbors)
The corresponding submatrix. | submatrix | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
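A hedged sketch of ``submatrix``, keeping only each row's nearest-neighbour columns; the toy matrix is an illustrative assumption.
# Hedged example: restrict a distance matrix to per-row neighbour columns.
import numpy as np
from umap.utils import submatrix

dmat = np.random.RandomState(0).random((6, 6))
indices = np.argsort(dmat, axis=1)[:, :3].astype(np.int32)
nn_dists = submatrix(dmat, indices, 3)  # shape (6, 3)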
def csr_unique(matrix, return_index=True, return_inverse=True, return_counts=True):
"""Find the unique elements of a sparse csr matrix.
We don't explicitly construct the unique matrix leaving that to the user
who may not want to duplicate a massive array in memory.
Returns the indices of the input array that give the unique values.
Returns the indices of the unique array that reconstructs the input array.
Returns the number of times each unique row appears in the input matrix.
matrix: a csr matrix
return_index: bool, optional
If true, return the row indices of 'matrix'
return_inverse: bool, optional
If true, return the indices of the unique array that can be
used to reconstruct 'matrix'.
return_counts: bool, optional
If true, returns the number of times each unique item appears in 'matrix'
The unique matrix can be computed via
unique_matrix = matrix[index]
and the original matrix reconstructed via
unique_matrix[inverse]
"""
lil_matrix = matrix.tolil()
rows = np.asarray(
[tuple(x + y) for x, y in zip(lil_matrix.rows, lil_matrix.data)], dtype=object
)
return_values = return_counts + return_inverse + return_index
return np.unique(
rows,
return_index=return_index,
return_inverse=return_inverse,
return_counts=return_counts,
)[1 : (return_values + 1)] | Find the unique elements of a sparse csr matrix.
We don't explicitly construct the unique matrix leaving that to the user
who may not want to duplicate a massive array in memory.
Returns the indices of the input array that give the unique values.
Returns the indices of the unique array that reconstructs the input array.
Returns the number of times each unique row appears in the input matrix.
matrix: a csr matrix
return_index: bool, optional
If true, return the row indices of 'matrix'
return_inverse: bool, optional
If true, return the indices of the unique array that can be
used to reconstruct 'matrix'.
return_counts: bool, optional
If true, returns the number of times each unique item appears in 'matrix'
The unique matrix can be computed via
unique_matrix = matrix[index]
and the original matrix reconstructed via
unique_matrix[inverse] | csr_unique | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
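A hedged sketch of ``csr_unique`` with its default return values (index, inverse, counts); the toy matrix is an illustrative assumption.
# Hedged example: deduplicate CSR rows without materialising the unique matrix.
import numpy as np
import scipy.sparse
from umap.utils import csr_unique

m = scipy.sparse.csr_matrix(np.array([[1, 0, 2], [0, 3, 0], [1, 0, 2]], dtype=np.float64))
index, inverse, counts = csr_unique(m)
unique_matrix = m[index]                # the two distinct rows
reconstructed = unique_matrix[inverse]  # row-for-row equal to m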
def disconnected_vertices(model):
"""
Returns a boolean vector indicating which vertices are disconnected from the umap graph.
These vertices will often be scattered across the space and make it difficult to focus on the main
manifold. They can either be filtered and have UMAP re-run or simply filtered from the interactive plotting tool
via the subset_points parameter.
Use ~disconnected_vertices(model) to only plot the connected points.
Parameters
----------
model: a trained UMAP model
Returns
-------
A boolean vector indicating which points are disconnected
"""
check_is_fitted(model, "graph_")
if model.unique:
vertices_disconnected = (
np.array(model.graph_[model._unique_inverse_].sum(axis=1)).flatten() == 0
)
else:
vertices_disconnected = np.array(model.graph_.sum(axis=1)).flatten() == 0
return vertices_disconnected | Returns a boolean vector indicating which vertices are disconnected from the umap graph.
These vertices will often be scattered across the space and make it difficult to focus on the main
manifold. They can either be filtered and have UMAP re-run or simply filtered from the interactive plotting tool
via the subset_points parameter.
Use ~disconnected_vertices(model) to only plot the connected points.
Parameters
----------
model: a trained UMAP model
Returns
-------
A boolean vector indicating which points are disconnected | disconnected_vertices | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
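A hedged sketch of masking out disconnected points before plotting; the dataset and the n_neighbors value are illustrative assumptions.
# Hedged example: keep only connected vertices of a fitted model.
import umap
from umap.utils import disconnected_vertices
from sklearn.datasets import load_digits

mapper = umap.UMAP(n_neighbors=10, random_state=42).fit(load_digits().data)
connected = ~disconnected_vertices(mapper)
connected_coords = mapper.embedding_[connected]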
def average_nn_distance(dist_matrix):
"""Calculate the average distance to each points nearest neighbors.
Parameters
----------
dist_matrix: a csr_matrix
A distance matrix (usually umap_model.graph_)
Returns
-------
An array with the average distance to each point's nearest neighbors
"""
(row_idx, col_idx, val) = scipy.sparse.find(dist_matrix)
# Count/sum is done per row
count_non_zero_elems = np.bincount(row_idx)
sum_non_zero_elems = np.bincount(row_idx, weights=val)
averages = sum_non_zero_elems / count_non_zero_elems
if any(np.isnan(averages)):
warn(
"Embedding contains disconnected vertices which will be ignored."
"Use umap.utils.disconnected_vertices() to identify them."
)
return averages | Calculate the average distance to each point's nearest neighbors.
Parameters
----------
dist_matrix: a csr_matrix
A distance matrix (usually umap_model.graph_)
Returns
-------
An array with the average distance to each point's nearest neighbors | average_nn_distance | python | lmcinnes/umap | umap/utils.py | https://github.com/lmcinnes/umap/blob/master/umap/utils.py | BSD-3-Clause |
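A hedged sketch of ``average_nn_distance`` applied to a fitted model's graph, as the docstring suggests; the dataset and parameters are illustrative assumptions.
# Hedged example: per-point average over the non-zero entries of graph_.
import umap
from umap.utils import average_nn_distance
from sklearn.datasets import load_digits

mapper = umap.UMAP(n_neighbors=10, random_state=42).fit(load_digits().data)
avg = average_nn_distance(mapper.graph_)  # one value per sample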
def component_layout(
data,
n_components,
component_labels,
dim,
random_state,
metric="euclidean",
metric_kwds={},
):
"""Provide a layout relating the separate connected components. This is done
by taking the centroid of each component and then performing a spectral embedding
of the centroids.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data -- required so we can generate centroids for each
connected component of the graph.
n_components: int
The number of distinct components to be laid out.
component_labels: array of shape (n_samples)
For each vertex in the graph the label of the component to
which the vertex belongs.
dim: int
The chosen embedding dimension.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
If metric is 'precomputed', 'linkage' keyword can be used to specify
'average', 'complete', or 'single' linkage. Default is 'average'
Returns
-------
component_embedding: array of shape (n_components, dim)
The ``dim``-dimensional embedding of the ``n_components``-many
connected components.
"""
if data is None:
# We don't have data to work with; just guess
return np.random.random(size=(n_components, dim)) * 10.0
component_centroids = np.empty((n_components, data.shape[1]), dtype=np.float64)
if metric == "precomputed":
# cannot compute centroids from precomputed distances
# instead, compute centroid distances using linkage
distance_matrix = np.zeros((n_components, n_components), dtype=np.float64)
linkage = metric_kwds.get("linkage", "average")
if linkage == "average":
linkage = np.mean
elif linkage == "complete":
linkage = np.max
elif linkage == "single":
linkage = np.min
else:
raise ValueError(
"Unrecognized linkage '%s'. Please choose from "
"'average', 'complete', or 'single'" % linkage
)
for c_i in range(n_components):
dm_i = data[component_labels == c_i]
for c_j in range(c_i + 1, n_components):
dist = linkage(dm_i[:, component_labels == c_j])
distance_matrix[c_i, c_j] = dist
distance_matrix[c_j, c_i] = dist
else:
for label in range(n_components):
component_centroids[label] = data[component_labels == label].mean(axis=0)
if scipy.sparse.isspmatrix(component_centroids):
warn(
"Forcing component centroids to dense; if you are running out of "
"memory then consider increasing n_neighbors."
)
component_centroids = component_centroids.toarray()
if metric in SPECIAL_METRICS:
distance_matrix = pairwise_special_metric(
component_centroids,
metric=metric,
kwds=metric_kwds,
)
elif metric in SPARSE_SPECIAL_METRICS:
distance_matrix = pairwise_special_metric(
component_centroids,
metric=SPARSE_SPECIAL_METRICS[metric],
kwds=metric_kwds,
)
else:
if callable(metric) and scipy.sparse.isspmatrix(data):
function_to_name_mapping = {
sparse_named_distances[k]: k
for k in set(SKLEARN_PAIRWISE_VALID_METRICS)
& set(sparse_named_distances.keys())
}
try:
metric_name = function_to_name_mapping[metric]
except KeyError:
raise NotImplementedError(
"Multicomponent layout for custom "
"sparse metrics is not implemented at "
"this time."
)
distance_matrix = pairwise_distances(
component_centroids, metric=metric_name, **metric_kwds
)
else:
distance_matrix = pairwise_distances(
component_centroids, metric=metric, **metric_kwds
)
affinity_matrix = np.exp(-(distance_matrix**2))
component_embedding = SpectralEmbedding(
n_components=dim, affinity="precomputed", random_state=random_state
).fit_transform(affinity_matrix)
component_embedding /= component_embedding.max()
return component_embedding | Provide a layout relating the separate connected components. This is done
by taking the centroid of each component and then performing a spectral embedding
of the centroids.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data -- required so we can generate centroids for each
connected component of the graph.
n_components: int
The number of distinct components to be laid out.
component_labels: array of shape (n_samples)
For each vertex in the graph the label of the component to
which the vertex belongs.
dim: int
The chosen embedding dimension.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
If metric is 'precomputed', 'linkage' keyword can be used to specify
'average', 'complete', or 'single' linkage. Default is 'average'
Returns
-------
component_embedding: array of shape (n_components, dim)
The ``dim``-dimensional embedding of the ``n_components``-many
connected components. | component_layout | python | lmcinnes/umap | umap/spectral.py | https://github.com/lmcinnes/umap/blob/master/umap/spectral.py | BSD-3-Clause |
def multi_component_layout(
data,
graph,
n_components,
component_labels,
dim,
random_state,
metric="euclidean",
metric_kwds={},
init="random",
tol=0.0,
maxiter=0,
):
"""Specialised layout algorithm for dealing with graphs with many connected components.
This will first find relative positions for the components by spectrally embedding
their centroids, then spectrally embed each individual connected component positioning
them according to the centroid embeddings. This provides a decent embedding of each
component while placing the components in good relative positions to one another.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data -- required so we can generate centroids for each
connected component of the graph.
graph: sparse matrix
The adjacency matrix of the graph to be embedded.
n_components: int
The number of distinct components to be laid out.
component_labels: array of shape (n_samples)
For each vertex in the graph the label of the component to
which the vertex belongs.
dim: int
The chosen embedding dimension.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
init: string, either "random" or "tsvd"
Indicates how to initialize the eigensolver. Use "random" (the default) to
use uniformly distributed random initialization; use "tsvd" to warm-start the
eigensolver with singular vectors of the Laplacian associated to the largest
singular values. This latter option also forces usage of the LOBPCG eigensolver;
with the former, ARPACK's solver ``eigsh`` will be used for smaller Laplacians.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_samples, dim)
The initial embedding of ``graph``.
"""
result = np.empty((graph.shape[0], dim), dtype=np.float32)
if n_components > 2 * dim:
meta_embedding = component_layout(
data,
n_components,
component_labels,
dim,
random_state,
metric=metric,
metric_kwds=metric_kwds,
)
else:
k = int(np.ceil(n_components / 2.0))
base = np.hstack([np.eye(k), np.zeros((k, dim - k))])
meta_embedding = np.vstack([base, -base])[:n_components]
for label in range(n_components):
component_graph = graph.tocsr()[component_labels == label, :].tocsc()
component_graph = component_graph[:, component_labels == label].tocoo()
distances = pairwise_distances([meta_embedding[label]], meta_embedding)
data_range = distances[distances > 0.0].min() / 2.0
if component_graph.shape[0] < 2 * dim or component_graph.shape[0] <= dim + 1:
result[component_labels == label] = (
random_state.uniform(
low=-data_range,
high=data_range,
size=(component_graph.shape[0], dim),
)
+ meta_embedding[label]
)
else:
component_embedding = _spectral_layout(
data=None,
graph=component_graph,
dim=dim,
random_state=random_state,
metric=metric,
metric_kwds=metric_kwds,
init=init,
tol=tol,
maxiter=maxiter,
)
expansion = data_range / np.max(np.abs(component_embedding))
component_embedding *= expansion
result[component_labels == label] = (
component_embedding + meta_embedding[label]
)
return result | Specialised layout algorithm for dealing with graphs with many connected components.
This will first find relative positions for the components by spectrally embedding
their centroids, then spectrally embed each individual connected component positioning
them according to the centroid embeddings. This provides a decent embedding of each
component while placing the components in good relative positions to one another.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data -- required so we can generate centroids for each
connected component of the graph.
graph: sparse matrix
The adjacency matrix of the graph to be embedded.
n_components: int
The number of distinct components to be laid out.
component_labels: array of shape (n_samples)
For each vertex in the graph the label of the component to
which the vertex belongs.
dim: int
The chosen embedding dimension.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
init: string, either "random" or "tsvd"
Indicates how to initialize the eigensolver. Use "random" (the default) to
use uniformly distributed random initialization; use "tsvd" to warm-start the
eigensolver with singular vectors of the Laplacian associated to the largest
singular values. This latter option also forces usage of the LOBPCG eigensolver;
with the former, ARPACK's solver ``eigsh`` will be used for smaller Laplacians.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_samples, dim)
The initial embedding of ``graph``. | multi_component_layout | python | lmcinnes/umap | umap/spectral.py | https://github.com/lmcinnes/umap/blob/master/umap/spectral.py | BSD-3-Clause |
def spectral_layout(
data,
graph,
dim,
random_state,
metric="euclidean",
metric_kwds={},
tol=0.0,
maxiter=0,
):
"""
Given a graph compute the spectral embedding of the graph. This is
simply the eigenvectors of the laplacian of the graph. Here we use the
normalized laplacian.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph.
"""
return _spectral_layout(
data=data,
graph=graph,
dim=dim,
random_state=random_state,
metric=metric,
metric_kwds=metric_kwds,
init="random",
tol=tol,
maxiter=maxiter,
) | Given a graph compute the spectral embedding of the graph. This is
simply the eigenvectors of the laplacian of the graph. Here we use the
normalized laplacian.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph. | spectral_layout | python | lmcinnes/umap | umap/spectral.py | https://github.com/lmcinnes/umap/blob/master/umap/spectral.py | BSD-3-Clause |
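A hedged sketch of computing a 2-D spectral initialisation from a fitted model's fuzzy graph; transform_mode="graph" is used only to skip the layout step, and the dataset is an illustrative assumption.
# Hedged example: spectral embedding of the fuzzy simplicial set.
import numpy as np
import umap
from umap.spectral import spectral_layout
from sklearn.datasets import load_digits

X = load_digits().data
mapper = umap.UMAP(n_neighbors=15, transform_mode="graph", random_state=42).fit(X)
init = spectral_layout(X, mapper.graph_, 2, np.random.RandomState(42))  # shape (n_samples, 2)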
def tswspectral_layout(
data,
graph,
dim,
random_state,
metric="euclidean",
metric_kwds={},
method=None,
tol=0.0,
maxiter=0,
):
"""Given a graph, compute the spectral embedding of the graph. This is
simply the eigenvectors of the Laplacian of the graph. Here we use the
normalized laplacian and a truncated SVD-based guess of the
eigenvectors to "warm" up the eigensolver. This function should
give results of similar accuracy to the spectral_layout function, but
may converge more quickly for graph Laplacians that cause
spectral_layout to take an excessive amount of time to complete.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
Used only if the multiple connected components are found in the
graph.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
If metric is 'precomputed', 'linkage' keyword can be used to specify
'average', 'complete', or 'single' linkage. Default is 'average'.
Used only if the multiple connected components are found in the
graph.
method: str (optional, default None, values either 'eigsh' or 'lobpcg')
Name of the eigenvalue computation method to use to compute the spectral
embedding. If left to None (or empty string), as by default, the method is
chosen from the number of vectors in play: larger vector collections are
handled with lobpcg, smaller collections with eigsh. Method names correspond
to SciPy routines in scipy.sparse.linalg.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph.
"""
return _spectral_layout(
data=data,
graph=graph,
dim=dim,
random_state=random_state,
metric=metric,
metric_kwds=metric_kwds,
init="tsvd",
method=method,
tol=tol,
maxiter=maxiter,
) | Given a graph, compute the spectral embedding of the graph. This is
simply the eigenvectors of the Laplacian of the graph. Here we use the
normalized laplacian and a truncated SVD-based guess of the
eigenvectors to "warm" up the eigensolver. This function should
give results of similar accuracy to the spectral_layout function, but
may converge more quickly for graph Laplacians that cause
spectral_layout to take an excessive amount of time to complete.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
Used only if the multiple connected components are found in the
graph.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
If metric is 'precomputed', 'linkage' keyword can be used to specify
'average', 'complete', or 'single' linkage. Default is 'average'.
Used only if the multiple connected components are found in the
graph.
method: str (optional, default None, values either 'eigsh' or 'lobpcg')
Name of the eigenvalue computation method to use to compute the spectral
embedding. If left to None (or empty string), as by default, the method is
chosen from the number of vectors in play: larger vector collections are
handled with lobpcg, smaller collections with eigsh. Method names correspond
to SciPy routines in scipy.sparse.linalg.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph. | tswspectral_layout | python | lmcinnes/umap | umap/spectral.py | https://github.com/lmcinnes/umap/blob/master/umap/spectral.py | BSD-3-Clause |
def _spectral_layout(
data,
graph,
dim,
random_state,
metric="euclidean",
metric_kwds={},
init="random",
method=None,
tol=0.0,
maxiter=0,
):
"""General implementation of the spectral embedding of the graph, derived as
a subset of the eigenvectors of the normalized Laplacian of the graph. The numerical
method for computing the eigendecomposition is chosen through heuristics.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
Used only if the multiple connected components are found in the
graph.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
If metric is 'precomputed', 'linkage' keyword can be used to specify
'average', 'complete', or 'single' linkage. Default is 'average'.
Used only if the multiple connected components are found in the
graph.
init: string, either "random" or "tsvd"
Indicates how to initialize the eigensolver. Use "random" (the default) to
use uniformly distributed random initialization; use "tsvd" to warm-start the
eigensolver with singular vectors of the Laplacian associated to the largest
singular values. This latter option also forces usage of the LOBPCG eigensolver;
with the former, ARPACK's solver ``eigsh`` will be used for smaller Laplacians.
method: string -- either "eigsh" or "lobpcg" -- or None
Name of the eigenvalue computation method to use to compute the spectral
embedding. If left to None (or empty string), as by default, the method is
chosen from the number of vectors in play: larger vector collections are
handled with lobpcg, smaller collections with eigsh. Method names correspond
to SciPy routines in scipy.sparse.linalg.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph.
"""
n_samples = graph.shape[0]
n_components, labels = scipy.sparse.csgraph.connected_components(graph)
if n_components > 1:
return multi_component_layout(
data,
graph,
n_components,
labels,
dim,
random_state,
metric=metric,
metric_kwds=metric_kwds,
)
sqrt_deg = np.sqrt(np.asarray(graph.sum(axis=0)).squeeze())
# standard Laplacian
# D = scipy.sparse.spdiags(diag_data, 0, graph.shape[0], graph.shape[0])
# L = D - graph
# Normalized Laplacian
I = scipy.sparse.identity(graph.shape[0], dtype=np.float64)
D = scipy.sparse.spdiags(1.0 / sqrt_deg, 0, graph.shape[0], graph.shape[0])
L = I - D * graph * D
if not scipy.sparse.issparse(L):
L = np.asarray(L)
k = dim + 1
num_lanczos_vectors = max(2 * k + 1, int(np.sqrt(graph.shape[0])))
gen = (
random_state
if isinstance(random_state, (np.random.Generator, np.random.RandomState))
else np.random.default_rng(seed=random_state)
)
if not method:
method = "eigsh" if L.shape[0] < 2000000 else "lobpcg"
try:
if init == "random":
X = gen.normal(size=(L.shape[0], k))
elif init == "tsvd":
X = TruncatedSVD(
n_components=k,
random_state=random_state,
# algorithm="arpack"
).fit_transform(L)
else:
raise ValueError(
"The init parameter must be either 'random' or 'tsvd': "
f"{init} is invalid."
)
# For such a normalized Laplacian, the first eigenvector is always
# proportional to sqrt(degrees). We thus replace the first t-SVD guess
# with the exact value.
X[:, 0] = sqrt_deg / np.linalg.norm(sqrt_deg)
if method == "eigsh":
eigenvalues, eigenvectors = scipy.sparse.linalg.eigsh(
L,
k,
which="SM",
ncv=num_lanczos_vectors,
tol=tol or 1e-4,
v0=np.ones(L.shape[0]),
maxiter=maxiter or graph.shape[0] * 5,
)
elif method == "lobpcg":
with warnings.catch_warnings():
warnings.filterwarnings(
category=UserWarning,
message=r"(?ms).*not reaching the requested tolerance",
action="error",
)
eigenvalues, eigenvectors = scipy.sparse.linalg.lobpcg(
L,
np.asarray(X),
largest=False,
tol=tol or 1e-4,
maxiter=maxiter or 5 * graph.shape[0],
)
else:
raise ValueError("Method should either be None, 'eigsh' or 'lobpcg'")
order = np.argsort(eigenvalues)[1:k]
return eigenvectors[:, order]
except (scipy.sparse.linalg.ArpackError, UserWarning):
warn(
"Spectral initialisation failed! The eigenvector solver\n"
"failed. This is likely due to too small an eigengap. Consider\n"
"adding some noise or jitter to your data.\n\n"
"Falling back to random initialisation!"
)
return gen.uniform(low=-10.0, high=10.0, size=(graph.shape[0], dim)) | General implementation of the spectral embedding of the graph, derived as
a subset of the eigenvectors of the normalized Laplacian of the graph. The numerical
method for computing the eigendecomposition is chosen through heuristics.
Parameters
----------
data: array of shape (n_samples, n_features)
The source data
graph: sparse matrix
The (weighted) adjacency matrix of the graph as a sparse matrix.
dim: int
The dimension of the space into which to embed.
random_state: numpy RandomState or equivalent
A state capable of being used as a numpy random state.
metric: string or callable (optional, default 'euclidean')
The metric used to measure distances among the source data points.
Used only if the multiple connected components are found in the
graph.
metric_kwds: dict (optional, default {})
Keyword arguments to be passed to the metric function.
If metric is 'precomputed', 'linkage' keyword can be used to specify
'average', 'complete', or 'single' linkage. Default is 'average'.
Used only if the multiple connected components are found in the
graph.
init: string, either "random" or "tsvd"
Indicates how to initialize the eigensolver. Use "random" (the default) to
use uniformly distributed random initialization; use "tsvd" to warm-start the
eigensolver with singular vectors of the Laplacian associated to the largest
singular values. This latter option also forces usage of the LOBPCG eigensolver;
with the former, ARPACK's solver ``eigsh`` will be used for smaller Laplacians.
method: string -- either "eigsh" or "lobpcg" -- or None
Name of the eigenvalue computation method to use to compute the spectral
embedding. If left to None (or empty string), as by default, the method is
chosen from the number of vectors in play: larger vector collections are
handled with lobpcg, smaller collections with eigsh. Method names correspond
to SciPy routines in scipy.sparse.linalg.
tol: float, default chosen by implementation
Stopping tolerance for the numerical algorithm computing the embedding.
maxiter: int, default chosen by implementation
Number of iterations the numerical algorithm will go through at most as it
attempts to compute the embedding.
Returns
-------
embedding: array of shape (n_vertices, dim)
The spectral embedding of the graph. | _spectral_layout | python | lmcinnes/umap | umap/spectral.py | https://github.com/lmcinnes/umap/blob/master/umap/spectral.py | BSD-3-Clause |
def _nhood_compare(indices_left, indices_right):
"""Compute Jaccard index of two neighborhoods"""
result = np.empty(indices_left.shape[0])
for i in range(indices_left.shape[0]):
with numba.objmode(intersection_size="intp"):
intersection_size = np.intersect1d(
indices_left[i], indices_right[i], assume_unique=True
).shape[0]
union_size = np.unique(np.hstack((indices_left[i], indices_right[i]))).shape[0]
result[i] = float(intersection_size) / float(union_size)
return result | Compute Jaccard index of two neighborhoods | _nhood_compare | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
def _get_extent(points):
"""Compute bounds on a space with appropriate padding"""
min_x = np.nanmin(points[:, 0])
max_x = np.nanmax(points[:, 0])
min_y = np.nanmin(points[:, 1])
max_y = np.nanmax(points[:, 1])
extent = (
np.round(min_x - 0.05 * (max_x - min_x)),
np.round(max_x + 0.05 * (max_x - min_x)),
np.round(min_y - 0.05 * (max_y - min_y)),
np.round(max_y + 0.05 * (max_y - min_y)),
)
return extent | Compute bounds on a space with appropriate padding | _get_extent | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
def _datashade_points(
points,
ax=None,
labels=None,
values=None,
cmap="Blues",
color_key=None,
color_key_cmap="Spectral",
background="white",
width=800,
height=800,
show_legend=True,
alpha=255,
):
"""Use datashader to plot points"""
extent = _get_extent(points)
canvas = ds.Canvas(
plot_width=width,
plot_height=height,
x_range=(extent[0], extent[1]),
y_range=(extent[2], extent[3]),
)
data = pd.DataFrame(points, columns=("x", "y"))
legend_elements = None
# Color by labels
if labels is not None:
if labels.shape[0] != points.shape[0]:
raise ValueError(
"Labels must have a label for "
"each sample (size mismatch: {} {})".format(
labels.shape[0], points.shape[0]
)
)
data["label"] = pd.Categorical(labels)
aggregation = canvas.points(data, "x", "y", agg=ds.count_cat("label"))
if color_key is None and color_key_cmap is None:
result = tf.shade(aggregation, how="eq_hist", alpha=alpha)
elif color_key is None:
unique_labels = np.unique(labels)
num_labels = unique_labels.shape[0]
color_key = _to_hex(
plt.get_cmap(color_key_cmap)(np.linspace(0, 1, num_labels))
)
legend_elements = [
Patch(facecolor=color_key[i], label=k)
for i, k in enumerate(unique_labels)
]
result = tf.shade(
aggregation, color_key=color_key, how="eq_hist", alpha=alpha
)
else:
legend_elements = [
Patch(facecolor=color_key[k], label=k) for k in color_key.keys()
]
result = tf.shade(
aggregation, color_key=color_key, how="eq_hist", alpha=alpha
)
# Color by values
elif values is not None:
if values.shape[0] != points.shape[0]:
raise ValueError(
"Values must have a value for "
"each sample (size mismatch: {} {})".format(
values.shape[0], points.shape[0]
)
)
unique_values = np.unique(values)
if unique_values.shape[0] >= 256:
min_val, max_val = np.min(values), np.max(values)
bin_size = (max_val - min_val) / 255.0
data["val_cat"] = pd.Categorical(
np.round((values - min_val) / bin_size).astype(np.int16)
)
aggregation = canvas.points(data, "x", "y", agg=ds.count_cat("val_cat"))
color_key = _to_hex(plt.get_cmap(cmap)(np.linspace(0, 1, 256)))
result = tf.shade(
aggregation, color_key=color_key, how="eq_hist", alpha=alpha
)
else:
data["val_cat"] = pd.Categorical(values)
aggregation = canvas.points(data, "x", "y", agg=ds.count_cat("val_cat"))
color_key_cols = _to_hex(
plt.get_cmap(cmap)(np.linspace(0, 1, unique_values.shape[0]))
)
color_key = dict(zip(unique_values, color_key_cols))
result = tf.shade(
aggregation, color_key=color_key, how="eq_hist", alpha=alpha
)
# Color by density (default datashader option)
else:
aggregation = canvas.points(data, "x", "y", agg=ds.count())
result = tf.shade(aggregation, cmap=plt.get_cmap(cmap), alpha=alpha)
if background is not None:
result = tf.set_background(result, background)
if ax is not None:
_embed_datashader_in_an_axis(result, ax)
if show_legend and legend_elements is not None:
ax.legend(handles=legend_elements)
return ax
else:
return result | Use datashader to plot points | _datashade_points | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
def _matplotlib_points(
points,
ax=None,
labels=None,
values=None,
cmap="Blues",
color_key=None,
color_key_cmap="Spectral",
background="white",
width=800,
height=800,
show_legend=True,
alpha=None,
):
"""Use matplotlib to plot points"""
point_size = 100.0 / np.sqrt(points.shape[0])
legend_elements = None
if ax is None:
dpi = plt.rcParams["figure.dpi"]
fig = plt.figure(figsize=(width / dpi, height / dpi))
ax = fig.add_subplot(111)
ax.set_facecolor(background)
# Color by labels
if labels is not None:
if labels.shape[0] != points.shape[0]:
raise ValueError(
"Labels must have a label for "
"each sample (size mismatch: {} {})".format(
labels.shape[0], points.shape[0]
)
)
if color_key is None:
unique_labels = np.unique(labels)
num_labels = unique_labels.shape[0]
color_key = plt.get_cmap(color_key_cmap)(np.linspace(0, 1, num_labels))
legend_elements = [
Patch(facecolor=color_key[i], label=unique_labels[i])
for i, k in enumerate(unique_labels)
]
if isinstance(color_key, dict):
colors = pd.Series(labels).map(color_key)
unique_labels = np.unique(labels)
legend_elements = [
Patch(facecolor=color_key[k], label=k) for k in unique_labels
]
else:
unique_labels = np.unique(labels)
if len(color_key) < unique_labels.shape[0]:
raise ValueError(
"Color key must have enough colors for the number of labels"
)
new_color_key = {
k: matplotlib.colors.to_hex(color_key[i])
for i, k in enumerate(unique_labels)
}
legend_elements = [
Patch(facecolor=color_key[i], label=k)
for i, k in enumerate(unique_labels)
]
colors = pd.Series(labels).map(new_color_key)
ax.scatter(points[:, 0], points[:, 1], s=point_size, c=colors, alpha=alpha)
# Color by values
elif values is not None:
if values.shape[0] != points.shape[0]:
raise ValueError(
"Values must have a value for "
"each sample (size mismatch: {} {})".format(
values.shape[0], points.shape[0]
)
)
ax.scatter(
points[:, 0], points[:, 1], s=point_size, c=values, cmap=cmap, alpha=alpha
)
# No color (just pick the midpoint of the cmap)
else:
color = plt.get_cmap(cmap)(0.5)
ax.scatter(points[:, 0], points[:, 1], s=point_size, c=color)
if show_legend and legend_elements is not None:
ax.legend(handles=legend_elements)
return ax | Use matplotlib to plot points | _matplotlib_points | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
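A small sketch of driving this private helper directly on synthetic data (normally it is reached via ``umap.plot.points``); the point set, labels and hex colours are made up, and the optional ``umap.plot`` dependencies are assumed to be installed:
import numpy as np
import umap.plot  # requires the optional plotting extras (datashader, bokeh, holoviews)

rng = np.random.RandomState(42)
pts = rng.normal(size=(500, 2)).astype(np.float32)  # stand-in for a 2D embedding
labels = rng.randint(0, 3, size=500)

# An explicit dict color_key mapping each label to an '#RRGGBB' colour string.
color_key = {0: "#1f77b4", 1: "#ff7f0e", 2: "#2ca02c"}
ax = umap.plot._matplotlib_points(pts, labels=labels, color_key=color_key)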
def show(plot_to_show):
"""Display a plot, either interactive or static.
Parameters
----------
plot_to_show: Output of a plotting command (matplotlib axis or bokeh figure)
The plot to show
Returns
-------
None
"""
if isinstance(plot_to_show, plt.Axes):
show_static()
elif isinstance(plot_to_show, bpl.figure):
show_interactive(plot_to_show)
elif isinstance(plot_to_show, hv.core.spaces.DynamicMap):
show_interactive(hv.render(plot_to_show), backend="bokeh")
else:
raise ValueError(
"The type of ``plot_to_show`` was not valid, or not understood."
) | Display a plot, either interactive or static.
Parameters
----------
plot_to_show: Output of a plotting command (matplotlib axis or bokeh figure)
The plot to show
Returns
-------
None | show | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
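A minimal usage sketch for ``show``, assuming a fitted model and the optional plotting extras; ``show`` simply dispatches on the type of the object returned by the plotting call:
import umap
import umap.plot
from sklearn.datasets import load_digits

digits = load_digits()
mapper = umap.UMAP().fit(digits.data)

ax = umap.plot.points(mapper, labels=digits.target)  # a matplotlib axis
umap.plot.show(ax)  # static path; a bokeh figure would take the interactive path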
def points(
umap_object,
points=None,
labels=None,
values=None,
theme=None,
cmap="Blues",
color_key=None,
color_key_cmap="Spectral",
background="white",
width=800,
height=800,
show_legend=True,
subset_points=None,
ax=None,
alpha=None,
):
"""Plot an embedding as points. Currently this only works
for 2D embeddings. While there are many optional parameters
to further control and tailor the plotting, you need only
pass in the trained/fit umap model to get results. This plot
utility will attempt to do the hard work of avoiding
over-plotting issues, and make it easy to automatically
colour points by a categorical labelling or numeric values.
This method is intended to be used within a Jupyter
notebook with ``%matplotlib inline``.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
points: array, shape (n_samples, dim) (optional, default None)
An array of points to be plotted. Usually this is None
and so the original embedding points of the umap_object
are used. However points can be passed explicitly instead
which is useful for points manually transformed.
labels: array, shape (n_samples,) (optional, default None)
An array of labels (assumed integer or categorical),
one for each data sample.
This will be used for coloring the points in
the plot according to their label. Note that
this option is mutually exclusive to the ``values``
option.
values: array, shape (n_samples,) (optional, default None)
An array of values (assumed float or continuous),
one for each sample.
This will be used for coloring the points in
the plot according to a colorscale associated
to the total range of values. Note that this
option is mutually exclusive to the ``labels``
option.
theme: string (optional, default None)
A color theme to use for plotting. A small set of
predefined themes are provided which have relatively
good aesthetics. Available themes are:
* 'blue'
* 'red'
* 'green'
* 'inferno'
* 'fire'
* 'viridis'
* 'darkblue'
* 'darkred'
* 'darkgreen'
cmap: string (optional, default 'Blues')
The name of a matplotlib colormap to use for coloring
or shading points. If no labels or values are passed
this will be used for shading points according to
density (largely only of relevance for very large
datasets). If values are passed this will be used for
shading according to the value. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key: dict or array, shape (n_categories) (optional, default None)
A way to assign colors to categoricals. This can either be
an explicit dict mapping labels to colors (as strings of form
'#RRGGBB'), or an array like object providing one color for
each distinct category being provided in ``labels``. Either
way this mapping will be used to color points according to
the label. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key_cmap: string (optional, default 'Spectral')
The name of a matplotlib colormap to use for categorical coloring.
If an explicit ``color_key`` is not given a color mapping for
categories can be generated from the label list and selecting
a matching list of colors from the given colormap. Note
that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
background: string (optional, default 'white')
The color of the background. Usually this will be either
'white' or 'black', but any color name will work. Ideally
one wants to match this appropriately to the colors being
used for points etc. This is one of the things that themes
handle for you. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
width: int (optional, default 800)
The desired width of the plot in pixels.
height: int (optional, default 800)
The desired height of the plot in pixels
show_legend: bool (optional, default True)
Whether to display a legend of the labels
subset_points: array, shape (n_samples,) (optional, default None)
A way to select a subset of points based on an array of boolean
values.
ax: matplotlib axis (optional, default None)
The matplotlib axis to draw the plot to, or if None, which is
the default, a new axis will be created and returned.
alpha: float (optional, default: None)
The alpha blending value, between 0 (transparent) and 1 (opaque).
Returns
-------
result: matplotlib axis
The result is a matplotlib axis with the relevant plot displayed.
If you are using a notebook and have ``%matplotlib inline`` set
then this will simply display inline.
"""
# if not hasattr(umap_object, "embedding_"):
# raise ValueError(
# "UMAP object must perform fit on data before it can be visualized"
# )
if theme is not None:
cmap = _themes[theme]["cmap"]
color_key_cmap = _themes[theme]["color_key_cmap"]
background = _themes[theme]["background"]
if labels is not None and values is not None:
raise ValueError(
"Conflicting options; only one of labels or values should be set"
)
if alpha is not None:
if not 0.0 <= alpha <= 1.0:
raise ValueError("Alpha must be between 0 and 1 inclusive")
if points is None:
points = _get_embedding(umap_object)
if subset_points is not None:
if len(subset_points) != points.shape[0]:
raise ValueError(
"Size of subset points ({}) does not match number of input points ({})".format(
len(subset_points), points.shape[0]
)
)
points = points[subset_points]
if labels is not None:
labels = labels[subset_points]
if values is not None:
values = values[subset_points]
if points.shape[1] != 2:
raise ValueError("Plotting is currently only implemented for 2D embeddings")
font_color = _select_font_color(background)
if ax is None:
dpi = plt.rcParams["figure.dpi"]
fig = plt.figure(figsize=(width / dpi, height / dpi))
ax = fig.add_subplot(111)
if points.shape[0] <= width * height // 10:
ax = _matplotlib_points(
points,
ax,
labels,
values,
cmap,
color_key,
color_key_cmap,
background,
width,
height,
show_legend,
alpha,
)
else:
# Datashader uses 0-255 as the range for alpha, with 255 as the default
if alpha is not None:
alpha = alpha * 255
else:
alpha = 255
ax = _datashade_points(
points,
ax,
labels,
values,
cmap,
color_key,
color_key_cmap,
background,
width,
height,
show_legend,
alpha,
)
ax.set(xticks=[], yticks=[])
if _get_metric(umap_object) != "euclidean":
ax.text(
0.99,
0.01,
"UMAP: metric={}, n_neighbors={}, min_dist={}".format(
_get_metric(umap_object), umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
else:
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
return ax | Plot an embedding as points. Currently this only works
for 2D embeddings. While there are many optional parameters
to further control and tailor the plotting, you need only
pass in the trained/fit umap model to get results. This plot
utility will attempt to do the hard work of avoiding
over-plotting issues, and make it easy to automatically
colour points by a categorical labelling or numeric values.
This method is intended to be used within a Jupyter
notebook with ``%matplotlib inline``.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
points: array, shape (n_samples, dim) (optional, default None)
An array of points to be plotted. Usually this is None
and so the original embedding points of the umap_object
are used. However points can be passed explicitly instead
which is useful for points manually transformed.
labels: array, shape (n_samples,) (optional, default None)
An array of labels (assumed integer or categorical),
one for each data sample.
This will be used for coloring the points in
the plot according to their label. Note that
this option is mutually exclusive to the ``values``
option.
values: array, shape (n_samples,) (optional, default None)
An array of values (assumed float or continuous),
one for each sample.
This will be used for coloring the points in
the plot according to a colorscale associated
to the total range of values. Note that this
option is mutually exclusive to the ``labels``
option.
theme: string (optional, default None)
A color theme to use for plotting. A small set of
predefined themes are provided which have relatively
good aesthetics. Available themes are:
* 'blue'
* 'red'
* 'green'
* 'inferno'
* 'fire'
* 'viridis'
* 'darkblue'
* 'darkred'
* 'darkgreen'
cmap: string (optional, default 'Blues')
The name of a matplotlib colormap to use for coloring
or shading points. If no labels or values are passed
this will be used for shading points according to
density (largely only of relevance for very large
datasets). If values are passed this will be used for
shading according to the value. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key: dict or array, shape (n_categories) (optional, default None)
A way to assign colors to categoricals. This can either be
an explicit dict mapping labels to colors (as strings of form
'#RRGGBB'), or an array like object providing one color for
each distinct category being provided in ``labels``. Either
way this mapping will be used to color points according to
the label. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key_cmap: string (optional, default 'Spectral')
The name of a matplotlib colormap to use for categorical coloring.
If an explicit ``color_key`` is not given a color mapping for
categories can be generated from the label list and selecting
a matching list of colors from the given colormap. Note
that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
background: string (optional, default 'white')
The color of the background. Usually this will be either
'white' or 'black', but any color name will work. Ideally
one wants to match this appropriately to the colors being
used for points etc. This is one of the things that themes
handle for you. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
width: int (optional, default 800)
The desired width of the plot in pixels.
height: int (optional, default 800)
The desired height of the plot in pixels
show_legend: bool (optional, default True)
Whether to display a legend of the labels
subset_points: array, shape (n_samples,) (optional, default None)
A way to select a subset of points based on an array of boolean
values.
ax: matplotlib axis (optional, default None)
The matplotlib axis to draw the plot to, or if None, which is
the default, a new axis will be created and returned.
alpha: float (optional, default: None)
The alpha blending value, between 0 (transparent) and 1 (opaque).
Returns
-------
result: matplotlib axis
The result is a matplotlib axis with the relevant plot displayed.
If you are using a notebook and have ``%matplotlib inline`` set
then this will simply display inline. | points | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
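A minimal usage sketch for ``points``; the digits dataset and the theme choice are illustrative, and the optional plotting extras are assumed to be installed:
import umap
import umap.plot
from sklearn.datasets import load_digits

digits = load_digits()
mapper = umap.UMAP(n_neighbors=15, min_dist=0.1).fit(digits.data)

# Colour by a categorical label; a theme overrides cmap, color_key_cmap and background.
ax = umap.plot.points(mapper, labels=digits.target, theme="fire")

# Alternatively colour by a continuous value per sample.
ax = umap.plot.points(mapper, values=digits.data.mean(axis=1), cmap="viridis")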
def connectivity(
umap_object,
edge_bundling=None,
edge_cmap="gray_r",
show_points=False,
labels=None,
values=None,
theme=None,
cmap="Blues",
color_key=None,
color_key_cmap="Spectral",
background="white",
width=800,
height=800,
):
"""Plot connectivity relationships of the underlying UMAP
simplicial set data structure. Internally UMAP will make
use of what can be viewed as a weighted graph. This graph
can be plotted using the layout provided by UMAP as a
potential diagnostic view of the embedding. Currently this only works
for 2D embeddings. While there are many optional parameters
to further control and tailor the plotting, you need only
pass in the trained/fit umap model to get results. This plot
utility will attempt to do the hard work of avoiding
over-plotting issues and provide options for plotting the
points as well as using edge bundling for graph visualization.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
edge_bundling: string or None (optional, default None)
The edge bundling method to use. Currently supported
are None or 'hammer'. See the datashader docs
on graph visualization for more details.
edge_cmap: string (default 'gray_r')
The name of a matplotlib colormap to use for shading/
coloring the edges of the connectivity graph. Note that
the ``theme``, if specified, will override this.
show_points: bool (optional, default False)
Whether to display the points over top of the edge
connectivity. Further options allow for coloring/
shading the points accordingly.
labels: array, shape (n_samples,) (optional, default None)
An array of labels (assumed integer or categorical),
one for each data sample.
This will be used for coloring the points in
the plot according to their label. Note that
this option is mutually exclusive to the ``values``
option.
values: array, shape (n_samples,) (optional, default None)
An array of values (assumed float or continuous),
one for each sample.
This will be used for coloring the points in
the plot according to a colorscale associated
to the total range of values. Note that this
option is mutually exclusive to the ``labels``
option.
theme: string (optional, default None)
A color theme to use for plotting. A small set of
predefined themes are provided which have relatively
good aesthetics. Available themes are:
* 'blue'
* 'red'
* 'green'
* 'inferno'
* 'fire'
* 'viridis'
* 'darkblue'
* 'darkred'
* 'darkgreen'
cmap: string (optional, default 'Blues')
The name of a matplotlib colormap to use for coloring
or shading points. If no labels or values are passed
this will be used for shading points according to
density (largely only of relevance for very large
datasets). If values are passed this will be used for
shading according to the value. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key: dict or array, shape (n_categories) (optional, default None)
A way to assign colors to categoricals. This can either be
an explicit dict mapping labels to colors (as strings of form
'#RRGGBB'), or an array like object providing one color for
each distinct category being provided in ``labels``. Either
way this mapping will be used to color points according to
the label. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key_cmap: string (optional, default 'Spectral')
The name of a matplotlib colormap to use for categorical coloring.
If an explicit ``color_key`` is not given a color mapping for
categories can be generated from the label list and selecting
a matching list of colors from the given colormap. Note
that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
background: string (optional, default 'white')
The color of the background. Usually this will be either
'white' or 'black', but any color name will work. Ideally
one wants to match this appropriately to the colors being
used for points etc. This is one of the things that themes
handle for you. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
width: int (optional, default 800)
The desired width of the plot in pixels.
height: int (optional, default 800)
The desired height of the plot in pixels
Returns
-------
result: matplotlib axis
The result is a matplotlib axis with the relevant plot displayed.
If you are using a notebook and have ``%matplotlib inline`` set
then this will simply display inline.
"""
if theme is not None:
cmap = _themes[theme]["cmap"]
color_key_cmap = _themes[theme]["color_key_cmap"]
edge_cmap = _themes[theme]["edge_cmap"]
background = _themes[theme]["background"]
points = _get_embedding(umap_object)
point_df = pd.DataFrame(points, columns=("x", "y"))
point_size = 100.0 / np.sqrt(points.shape[0])
if point_size > 1:
px_size = int(np.round(point_size))
else:
px_size = 1
if show_points:
edge_how = "log"
else:
edge_how = "eq_hist"
coo_graph = umap_object.graph_.tocoo()
edge_df = pd.DataFrame(
np.vstack([coo_graph.row, coo_graph.col, coo_graph.data]).T,
columns=("source", "target", "weight"),
)
edge_df["source"] = edge_df.source.astype(np.int32)
edge_df["target"] = edge_df.target.astype(np.int32)
extent = _get_extent(points)
canvas = ds.Canvas(
plot_width=width,
plot_height=height,
x_range=(extent[0], extent[1]),
y_range=(extent[2], extent[3]),
)
if edge_bundling is None:
edges = bd.directly_connect_edges(point_df, edge_df, weight="weight")
elif edge_bundling == "hammer":
warn(
"Hammer edge bundling is expensive for large graphs!\n"
"This may take a long time to compute!"
)
edges = bd.hammer_bundle(point_df, edge_df, weight="weight")
else:
raise ValueError("{} is not a recognised bundling method".format(edge_bundling))
edge_img = tf.shade(
canvas.line(edges, "x", "y", agg=ds.sum("weight")),
cmap=plt.get_cmap(edge_cmap),
how=edge_how,
)
edge_img = tf.set_background(edge_img, background)
if show_points:
point_img = _datashade_points(
points,
None,
labels,
values,
cmap,
color_key,
color_key_cmap,
None,
width,
height,
False,
)
if px_size > 1:
point_img = tf.dynspread(point_img, threshold=0.5, max_px=px_size)
result = tf.stack(edge_img, point_img, how="over")
else:
result = edge_img
font_color = _select_font_color(background)
dpi = plt.rcParams["figure.dpi"]
fig = plt.figure(figsize=(width / dpi, height / dpi))
ax = fig.add_subplot(111)
_embed_datashader_in_an_axis(result, ax)
ax.set(xticks=[], yticks=[])
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
return ax | Plot connectivity relationships of the underlying UMAP
simplicial set data structure. Internally UMAP will make
use of what can be viewed as a weighted graph. This graph
can be plotted using the layout provided by UMAP as a
potential diagnostic view of the embedding. Currently this only works
for 2D embeddings. While there are many optional parameters
to further control and tailor the plotting, you need only
pass in the trained/fit umap model to get results. This plot
utility will attempt to do the hard work of avoiding
over-plotting issues and provide options for plotting the
points as well as using edge bundling for graph visualization.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
edge_bundling: string or None (optional, default None)
The edge bundling method to use. Currently supported
are None or 'hammer'. See the datashader docs
on graph visualization for more details.
edge_cmap: string (default 'gray_r')
The name of a matplotlib colormap to use for shading/
coloring the edges of the connectivity graph. Note that
the ``theme``, if specified, will override this.
show_points: bool (optional, default False)
Whether to display the points over top of the edge
connectivity. Further options allow for coloring/
shading the points accordingly.
labels: array, shape (n_samples,) (optional, default None)
An array of labels (assumed integer or categorical),
one for each data sample.
This will be used for coloring the points in
the plot according to their label. Note that
this option is mutually exclusive to the ``values``
option.
values: array, shape (n_samples,) (optional, default None)
An array of values (assumed float or continuous),
one for each sample.
This will be used for coloring the points in
the plot according to a colorscale associated
to the total range of values. Note that this
option is mutually exclusive to the ``labels``
option.
theme: string (optional, default None)
A color theme to use for plotting. A small set of
predefined themes are provided which have relatively
good aesthetics. Available themes are:
* 'blue'
* 'red'
* 'green'
* 'inferno'
* 'fire'
* 'viridis'
* 'darkblue'
* 'darkred'
* 'darkgreen'
cmap: string (optional, default 'Blues')
The name of a matplotlib colormap to use for coloring
or shading points. If no labels or values are passed
this will be used for shading points according to
density (largely only of relevance for very large
datasets). If values are passed this will be used for
shading according to the value. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key: dict or array, shape (n_categories) (optional, default None)
A way to assign colors to categoricals. This can either be
an explicit dict mapping labels to colors (as strings of form
'#RRGGBB'), or an array like object providing one color for
each distinct category being provided in ``labels``. Either
way this mapping will be used to color points according to
the label. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key_cmap: string (optional, default 'Spectral')
The name of a matplotlib colormap to use for categorical coloring.
If an explicit ``color_key`` is not given a color mapping for
categories can be generated from the label list and selecting
a matching list of colors from the given colormap. Note
that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
background: string (optional, default 'white')
The color of the background. Usually this will be either
'white' or 'black', but any color name will work. Ideally
one wants to match this appropriately to the colors being
used for points etc. This is one of the things that themes
handle for you. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
width: int (optional, default 800)
The desired width of the plot in pixels.
height: int (optional, default 800)
The desired height of the plot in pixels
Returns
-------
result: matplotlib axis
The result is a matplotlib axis with the relevant plot displayed.
If you are using a notebook and have ``%matplotlib inline`` set
then this will simply display inline. | connectivity | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
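A usage sketch for ``connectivity``; the fitted model is illustrative, and hammer bundling is shown only to demonstrate the option (it is slow on large graphs, as the warning above notes):
import umap
import umap.plot
from sklearn.datasets import load_digits

mapper = umap.UMAP().fit(load_digits().data)

# Straight-line edges with the embedded points drawn on top.
ax = umap.plot.connectivity(mapper, show_points=True)

# Hammer edge bundling; expensive, but often easier to read for dense graphs.
ax = umap.plot.connectivity(mapper, edge_bundling="hammer", theme="darkblue")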
def diagnostic(
umap_object,
diagnostic_type="pca",
nhood_size=15,
local_variance_threshold=0.8,
ax=None,
cmap="viridis",
point_size=None,
background="white",
width=800,
height=800,
):
"""Provide a diagnostic plot or plots for a UMAP embedding.
There are a number of plots that can be helpful for diagnostic
purposes in understanding your embedding. Currently these are
restricted to methods of coloring a scatterplot of the
embedding to show more about how the embedding is representing
the data. The first class of such plots uses a linear method
that preserves global structure well to embed the data into
three dimensions, and then interprets such coordinates as a
color space -- coloring the points by their location in the
linear global structure preserving embedding. In such plots
one should look for discontinuities of colour, and consider
overall global gradients of color. The diagnostic types here
are ``'pca'``, ``'ica'``, and ``'vq'`` (vector quantization).
The second class considers the local neighbor structure. One
can either look at how well the neighbor structure is
preserved, or how the estimated local dimension of the data
varies. Both of these are available, although the local
dimension estimation is the preferred option. You can
access these as diagnostic types ``'local_dim'`` and
``'neighborhood'``.
Finally the diagnostic type ``'all'`` will provide a
grid of diagnostic plots.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
diagnostic_type: str (optional, default 'pca')
The type of diagnostic plot to show. The options are
* 'pca'
* 'ica'
* 'vq'
* 'local_dim'
* 'neighborhood'
* 'all'
nhood_size: int (optional, default 15)
The size of neighborhood to compare for local
neighborhood preservation estimates.
local_variance_threshold: float (optional, default 0.8)
To estimate the local dimension we consider a PCA of
the local neighborhood and estimate the dimension
as that which provides ``local_variance_threshold``
or more of the ``explained_variance_ratio_``.
ax: matplotlib axis (optional, default None)
A matplotlib axis to plot to, or, if None, a new
axis will be created and returned.
cmap: str (optional, default 'viridis')
The name of a matplotlib colormap to use for coloring
points if the ``'local_dim'`` or ``'neighborhood'``
option are selected.
point_size: int (optional, None)
If provided this will fix the point size for the
plot(s). If None then a suitable point size will
be estimated from the data.
Returns
-------
result: matplotlib axis
The result is a matplotlib axis with the relevant plot displayed.
If you are using a notebook and have ``%matplotlib inline`` set
then this will simply display inline.
"""
points = _get_embedding(umap_object)
if points.shape[1] != 2:
raise ValueError("Plotting is currently only implemented for 2D embeddings")
if point_size is None:
point_size = 100.0 / np.sqrt(points.shape[0])
if ax is None:
dpi = plt.rcParams["figure.dpi"]
if diagnostic_type in ("local_dim", "neighborhood"):
width *= 1.1
font_color = _select_font_color(background)
if ax is None and diagnostic_type != "all":
fig = plt.figure()
ax = fig.add_subplot(111)
if diagnostic_type == "pca":
color_proj = sklearn.decomposition.PCA(n_components=3).fit_transform(
umap_object._raw_data
)
color_proj -= np.min(color_proj)
color_proj /= np.max(color_proj, axis=0)
ax.scatter(points[:, 0], points[:, 1], s=point_size, c=color_proj, alpha=0.66)
ax.set_title("Colored by RGB coords of PCA embedding")
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
ax.set(xticks=[], yticks=[])
elif diagnostic_type == "ica":
color_proj = sklearn.decomposition.FastICA(n_components=3).fit_transform(
umap_object._raw_data
)
color_proj -= np.min(color_proj)
color_proj /= np.max(color_proj, axis=0)
ax.scatter(points[:, 0], points[:, 1], s=point_size, c=color_proj, alpha=0.66)
ax.set_title("Colored by RGB coords of FastICA embedding")
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
ax.set(xticks=[], yticks=[])
elif diagnostic_type == "vq":
color_projector = sklearn.cluster.KMeans(n_clusters=3).fit(
umap_object._raw_data
)
color_proj = sklearn.metrics.pairwise_distances(
umap_object._raw_data, color_projector.cluster_centers_
)
color_proj -= np.min(color_proj)
color_proj /= np.max(color_proj, axis=0)
ax.scatter(points[:, 0], points[:, 1], s=point_size, c=color_proj, alpha=0.66)
ax.set_title("Colored by RGB coords of Vector Quantization")
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
ax.set(xticks=[], yticks=[])
elif diagnostic_type == "neighborhood":
highd_indices, highd_dists = _nhood_search(umap_object, nhood_size)
tree = sklearn.neighbors.KDTree(points)
lowd_dists, lowd_indices = tree.query(points, k=nhood_size)
accuracy = _nhood_compare(
highd_indices.astype(np.int32), lowd_indices.astype(np.int32)
)
vmin = np.percentile(accuracy, 5)
vmax = np.percentile(accuracy, 95)
ax.scatter(
points[:, 0],
points[:, 1],
s=point_size,
c=accuracy,
cmap=cmap,
vmin=vmin,
vmax=vmax,
)
ax.set_title("Colored by neighborhood Jaccard index")
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
ax.set(xticks=[], yticks=[])
norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
mappable.set_array(accuracy)
plt.colorbar(mappable, ax=ax)
elif diagnostic_type == "local_dim":
highd_indices, highd_dists = _nhood_search(umap_object, umap_object.n_neighbors)
data = umap_object._raw_data
local_dim = np.empty(data.shape[0], dtype=np.int64)
for i in range(data.shape[0]):
pca = sklearn.decomposition.PCA().fit(data[highd_indices[i]])
local_dim[i] = np.where(
np.cumsum(pca.explained_variance_ratio_) > local_variance_threshold
)[0][0]
vmin = np.percentile(local_dim, 5)
vmax = np.percentile(local_dim, 95)
ax.scatter(
points[:, 0],
points[:, 1],
s=point_size,
c=local_dim,
cmap=cmap,
vmin=vmin,
vmax=vmax,
)
ax.set_title("Colored by approx local dimension")
ax.text(
0.99,
0.01,
"UMAP: n_neighbors={}, min_dist={}".format(
umap_object.n_neighbors, umap_object.min_dist
),
transform=ax.transAxes,
horizontalalignment="right",
color=font_color,
)
ax.set(xticks=[], yticks=[])
norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)
mappable = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)
mappable.set_array(local_dim)
plt.colorbar(mappable, ax=ax)
elif diagnostic_type == "all":
cols = int(len(_diagnostic_types) ** 0.5 // 1)
rows = len(_diagnostic_types) // cols + 1
fig, axs = plt.subplots(rows, cols, figsize=(10, 10), constrained_layout=True)
axs = axs.flat
for ax in axs[len(_diagnostic_types) :]:
ax.remove()
for ax, plt_type in zip(axs, _diagnostic_types):
diagnostic(
umap_object,
diagnostic_type=plt_type,
ax=ax,
point_size=point_size / 4.0,
)
else:
raise ValueError(
"Unknown diagnostic; should be one of "
+ ", ".join(list(_diagnostic_types))
+ ' or "all"'
)
return ax | Provide a diagnostic plot or plots for a UMAP embedding.
There are a number of plots that can be helpful for diagnostic
purposes in understanding your embedding. Currently these are
restricted to methods of coloring a scatterplot of the
embedding to show more about how the embedding is representing
the data. The first class of such plots uses a linear method
that preserves global structure well to embed the data into
three dimensions, and then interprets such coordinates as a
color space -- coloring the points by their location in the
linear global structure preserving embedding. In such plots
one should look for discontinuities of colour, and consider
overall global gradients of color. The diagnostic types here
are ``'pca'``, ``'ica'``, and ``'vq'`` (vector quantization).
The second class considers the local neighbor structure. One
can either look at how well the neighbor structure is
preserved, or how the estimated local dimension of the data
varies. Both of these are available, although the local
dimension estimation is the preferred option. You can
access these as diagnostic types ``'local_dim'`` and
``'neighborhood'``.
Finally the diagnostic type ``'all'`` will provide a
grid of diagnostic plots.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
diagnostic_type: str (optional, default 'pca')
The type of diagnostic plot to show. The options are
* 'pca'
* 'ica'
* 'vq'
* 'local_dim'
* 'neighborhood'
* 'all'
nhood_size: int (optional, default 15)
The size of neighborhood to compare for local
neighborhood preservation estimates.
local_variance_threshold: float (optional, default 0.8)
To estimate the local dimension we consider a PCA of
the local neighborhood and estimate the dimension
as that which provides ``local_variance_threshold``
or more of the ``explained_variance_ratio_``.
ax: matplotlib axis (optional, default None)
A matplotlib axis to plot to, or, if None, a new
axis will be created and returned.
cmap: str (optional, default 'viridis')
The name of a matplotlib colormap to use for coloring
points if the ``'local_dim'`` or ``'neighborhood'``
option are selected.
point_size: int (optional, None)
If provided this will fix the point size for the
plot(s). If None then a suitable point size will
be estimated from the data.
Returns
-------
result: matplotlib axis
The result is a matplotlib axis with the relevant plot displayed.
If you are using a notebook and have ``%matplotlib inline`` set
then this will simply display inline. | diagnostic | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
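A usage sketch for ``diagnostic``; the fitted model is illustrative and the diagnostic types simply mirror the options listed above:
import umap
import umap.plot
from sklearn.datasets import load_digits

mapper = umap.UMAP().fit(load_digits().data)

# Colour the embedding by the RGB coordinates of a 3-component PCA projection.
ax = umap.plot.diagnostic(mapper, diagnostic_type="pca")

# Colour by the estimated local intrinsic dimension of each neighbourhood.
ax = umap.plot.diagnostic(mapper, diagnostic_type="local_dim")

# Or draw the full grid of diagnostic plots at once.
ax = umap.plot.diagnostic(mapper, diagnostic_type="all")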
def interactive(
umap_object,
labels=None,
values=None,
hover_data=None,
tools=None,
theme=None,
cmap="Blues",
color_key=None,
color_key_cmap="Spectral",
background="white",
width=800,
height=800,
point_size=None,
subset_points=None,
interactive_text_search=False,
interactive_text_search_columns=None,
interactive_text_search_alpha_contrast=0.95,
alpha=None,
):
"""Create an interactive bokeh plot of a UMAP embedding.
While static plots are useful, sometimes a plot that
supports interactive zooming, and hover tooltips for
individual points is much more desirable. This function
provides a simple interface for creating such plots. The
result is a bokeh plot that will be displayed in a notebook.
Note that more complex tooltips etc. will require custom
code -- this is merely meant to provide fast and easy
access to interactive plotting.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
labels: array, shape (n_samples,) (optional, default None)
An array of labels (assumed integer or categorical),
one for each data sample.
This will be used for coloring the points in
the plot according to their label. Note that
this option is mutually exclusive to the ``values``
option.
values: array, shape (n_samples,) (optional, default None)
An array of values (assumed float or continuous),
one for each sample.
This will be used for coloring the points in
the plot according to a colorscale associated
to the total range of values. Note that this
option is mutually exclusive to the ``labels``
option.
hover_data: DataFrame, shape (n_samples, n_tooltip_features)
(optional, default None)
A dataframe of tooltip data. Each column of the dataframe
should be a Series of length ``n_samples`` providing a value
for each data point. Column names will be used for
identifying information within the tooltip.
tools: List (optional, default None),
Defines the tools to be configured for interactive plots.
The list can be mixed type of string and tools objects defined by
Bokeh like HoverTool. Default tool list Bokeh uses is
["pan","wheel_zoom","box_zoom","save","reset","help",].
When tools are specified, and includes hovertool, automatic tooltip
based on hover_data is not created.
theme: string (optional, default None)
A color theme to use for plotting. A small set of
predefined themes are provided which have relatively
good aesthetics. Available themes are:
* 'blue'
* 'red'
* 'green'
* 'inferno'
* 'fire'
* 'viridis'
* 'darkblue'
* 'darkred'
* 'darkgreen'
cmap: string (optional, default 'Blues')
The name of a matplotlib colormap to use for coloring
or shading points. If no labels or values are passed
this will be used for shading points according to
density (largely only of relevance for very large
datasets). If values are passed this will be used for
shading according to the value. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key: dict or array, shape (n_categories) (optional, default None)
A way to assign colors to categoricals. This can either be
an explicit dict mapping labels to colors (as strings of form
'#RRGGBB'), or an array like object providing one color for
each distinct category being provided in ``labels``. Either
way this mapping will be used to color points according to
the label. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key_cmap: string (optional, default 'Spectral')
The name of a matplotlib colormap to use for categorical coloring.
If an explicit ``color_key`` is not given a color mapping for
categories can be generated from the label list and selecting
a matching list of colors from the given colormap. Note
that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
background: string (optional, default 'white')
The color of the background. Usually this will be either
'white' or 'black', but any color name will work. Ideally
one wants to match this appropriately to the colors being
used for points etc. This is one of the things that themes
handle for you. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
width: int (optional, default 800)
The desired width of the plot in pixels.
height: int (optional, default 800)
The desired height of the plot in pixels
point_size: int (optional, default None)
The size of each point marker
subset_points: array, shape (n_samples,) (optional, default None)
A way to select a subset of points based on an array of boolean
values.
interactive_text_search: bool (optional, default False)
Whether to include a text search widget above the interactive plot
interactive_text_search_columns: list (optional, default None)
Columns of data source to search. Searches labels and hover_data by default.
interactive_text_search_alpha_contrast: float (optional, default 0.95)
Alpha value for points matching text search. Alpha value for points
not matching text search will be 1 - interactive_text_search_alpha_contrast
alpha: float (optional, default: None)
The alpha blending value, between 0 (transparent) and 1 (opaque).
Returns
-------
"""
if theme is not None:
cmap = _themes[theme]["cmap"]
color_key_cmap = _themes[theme]["color_key_cmap"]
background = _themes[theme]["background"]
if labels is not None and values is not None:
raise ValueError(
"Conflicting options; only one of labels or values should be set"
)
if alpha is not None:
if not 0.0 <= alpha <= 1.0:
raise ValueError("Alpha must be between 0 and 1 inclusive")
points = _get_embedding(umap_object)
if subset_points is not None:
if len(subset_points) != points.shape[0]:
raise ValueError(
"Size of subset points ({}) does not match number of input points ({})".format(
len(subset_points), points.shape[0]
)
)
points = points[subset_points]
if points.shape[1] != 2:
raise ValueError("Plotting is currently only implemented for 2D embeddings")
if point_size is None:
point_size = 100.0 / np.sqrt(points.shape[0])
data = pd.DataFrame(_get_embedding(umap_object), columns=("x", "y"))
if labels is not None:
data["label"] = np.asarray(labels)
if color_key is None:
unique_labels = np.unique(labels)
num_labels = unique_labels.shape[0]
color_key = _to_hex(
plt.get_cmap(color_key_cmap)(np.linspace(0, 1, num_labels))
)
if isinstance(color_key, dict):
data["color"] = pd.Series(labels).map(color_key)
else:
unique_labels = np.unique(labels)
if len(color_key) < unique_labels.shape[0]:
raise ValueError(
"Color key must have enough colors for the number of labels"
)
new_color_key = {k: color_key[i] for i, k in enumerate(unique_labels)}
data["color"] = pd.Series(labels).map(new_color_key)
colors = "color"
elif values is not None:
data["value"] = np.asarray(values)
palette = _to_hex(plt.get_cmap(cmap)(np.linspace(0, 1, 256)))
colors = btr.linear_cmap(
"value", palette, low=np.min(values), high=np.max(values)
)
else:
colors = matplotlib.colors.rgb2hex(plt.get_cmap(cmap)(0.5))
if subset_points is not None:
data = data[subset_points]
if hover_data is not None:
hover_data = hover_data[subset_points]
if points.shape[0] <= width * height // 10:
tooltips = None
tooltip_needed = True
if hover_data is not None:
tooltip_dict = {}
for col_name in hover_data:
data[col_name] = hover_data[col_name]
tooltip_dict[col_name] = "@{" + col_name + "}"
tooltips = list(tooltip_dict.items())
if tools is not None:
for _tool in tools:
if _tool.__class__.__name__ == "HoverTool":
tooltip_needed = False
break
if alpha is not None:
data["alpha"] = alpha
else:
data["alpha"] = 1
# bpl.output_notebook(hide_banner=True) # this doesn't work for non-notebook use
data_source = bpl.ColumnDataSource(data)
plot = bpl.figure(
width=width,
height=height,
tooltips=None if not tooltip_needed else tooltips,
tools=(
tools
if tools is not None
else "pan,wheel_zoom,box_zoom,save,reset,help"
),
background_fill_color=background,
)
plot.circle(
x="x",
y="y",
source=data_source,
color=colors,
size=point_size,
alpha="alpha",
)
plot.grid.visible = False
plot.axis.visible = False
if interactive_text_search:
text_input = TextInput(value="", title="Search:")
if interactive_text_search_columns is None:
interactive_text_search_columns = []
if hover_data is not None:
interactive_text_search_columns.extend(hover_data.columns)
if labels is not None:
interactive_text_search_columns.append("label")
if len(interactive_text_search_columns) == 0:
warn(
"interactive_text_search_columns set to True, but no hover_data or labels provided."
"Please provide hover_data or labels to use interactive text search."
)
else:
callback = CustomJS(
args=dict(
source=data_source,
matching_alpha=interactive_text_search_alpha_contrast,
non_matching_alpha=1 - interactive_text_search_alpha_contrast,
search_columns=interactive_text_search_columns,
),
code="""
var data = source.data;
var text_search = cb_obj.value;
var search_columns_dict = {}
for (var col in search_columns){
search_columns_dict[col] = search_columns[col]
}
// Loop over columns and values
// If there is no match for any column for a given row, change the alpha value
var string_match = false;
for (var i = 0; i < data.x.length; i++) {
string_match = false
for (var j in search_columns_dict) {
if (String(data[search_columns_dict[j]][i]).includes(text_search) ) {
string_match = true
}
}
if (string_match){
data['alpha'][i] = matching_alpha
}else{
data['alpha'][i] = non_matching_alpha
}
}
source.change.emit();
""",
)
text_input.js_on_change("value", callback)
plot = column(text_input, plot)
# bpl.show(plot)
else:
if hover_data is not None:
warn(
"Too many points for hover data -- tooltips will not"
"be displayed. Sorry; try subsampling your data."
)
if interactive_text_search:
warn("Too many points for text search." "Sorry; try subsampling your data.")
if alpha is not None:
warn("Alpha parameter will not be applied on holoviews plots")
hv.extension("bokeh")
hv.output(size=300)
hv.opts.defaults(hv.opts.RGB(bgcolor=background, xaxis=None, yaxis=None))
if labels is not None:
point_plot = hv.Points(data, kdims=["x", "y"])
plot = hd.datashade(
point_plot,
aggregator=ds.count_cat("color"),
color_key=color_key,
cmap=plt.get_cmap(cmap),
width=width,
height=height,
)
elif values is not None:
min_val = data.values.min()
val_range = data.values.max() - min_val
data["val_cat"] = pd.Categorical(
(data.values - min_val) // (val_range // 256)
)
point_plot = hv.Points(data, kdims=["x", "y"], vdims=["val_cat"])
plot = hd.datashade(
point_plot,
aggregator=ds.count_cat("val_cat"),
cmap=plt.get_cmap(cmap),
width=width,
height=height,
)
else:
point_plot = hv.Points(data, kdims=["x", "y"])
plot = hd.datashade(
point_plot,
aggregator=ds.count(),
cmap=plt.get_cmap(cmap),
width=width,
height=height,
)
return plot | Create an interactive bokeh plot of a UMAP embedding.
While static plots are useful, sometimes a plot that
supports interactive zooming, and hover tooltips for
individual points is much more desirable. This function
provides a simple interface for creating such plots. The
result is a bokeh plot that will be displayed in a notebook.
Note that more complex tooltips etc. will require custom
code -- this is merely meant to provide fast and easy
access to interactive plotting.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has a 2D embedding.
labels: array, shape (n_samples,) (optional, default None)
An array of labels (assumed integer or categorical),
one for each data sample.
This will be used for coloring the points in
the plot according to their label. Note that
this option is mutually exclusive to the ``values``
option.
values: array, shape (n_samples,) (optional, default None)
An array of values (assumed float or continuous),
one for each sample.
This will be used for coloring the points in
the plot according to a colorscale associated
to the total range of values. Note that this
option is mutually exclusive to the ``labels``
option.
hover_data: DataFrame, shape (n_samples, n_tooltip_features)
(optional, default None)
A dataframe of tooltip data. Each column of the dataframe
should be a Series of length ``n_samples`` providing a value
for each data point. Column names will be used for
identifying information within the tooltip.
tools: List (optional, default None),
Defines the tools to be configured for interactive plots.
The list can be mixed type of string and tools objects defined by
Bokeh like HoverTool. Default tool list Bokeh uses is
["pan","wheel_zoom","box_zoom","save","reset","help",].
When tools are specified, and includes hovertool, automatic tooltip
based on hover_data is not created.
theme: string (optional, default None)
A color theme to use for plotting. A small set of
predefined themes are provided which have relatively
good aesthetics. Available themes are:
* 'blue'
* 'red'
* 'green'
* 'inferno'
* 'fire'
* 'viridis'
* 'darkblue'
* 'darkred'
* 'darkgreen'
cmap: string (optional, default 'Blues')
The name of a matplotlib colormap to use for coloring
or shading points. If no labels or values are passed
this will be used for shading points according to
density (largely only of relevance for very large
datasets). If values are passed this will be used for
shading according to the value. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key: dict or array, shape (n_categories) (optional, default None)
A way to assign colors to categoricals. This can either be
an explicit dict mapping labels to colors (as strings of form
'#RRGGBB'), or an array like object providing one color for
each distinct category being provided in ``labels``. Either
way this mapping will be used to color points according to
the label. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
color_key_cmap: string (optional, default 'Spectral')
The name of a matplotlib colormap to use for categorical coloring.
If an explicit ``color_key`` is not given a color mapping for
categories can be generated from the label list and selecting
a matching list of colors from the given colormap. Note
that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
background: string (optional, default 'white')
The color of the background. Usually this will be either
'white' or 'black', but any color name will work. Ideally
one wants to match this appropriately to the colors being
used for points etc. This is one of the things that themes
handle for you. Note that if theme
is passed then this value will be overridden by the
corresponding option of the theme.
width: int (optional, default 800)
The desired width of the plot in pixels.
height: int (optional, default 800)
The desired height of the plot in pixels
point_size: int (optional, default None)
The size of each point marker
subset_points: array, shape (n_samples,) (optional, default None)
A way to select a subset of points based on an array of boolean
values.
interactive_text_search: bool (optional, default False)
Whether to include a text search widget above the interactive plot
interactive_text_search_columns: list (optional, default None)
Columns of data source to search. Searches labels and hover_data by default.
interactive_text_search_alpha_contrast: float (optional, default 0.95)
Alpha value for points matching text search. Alpha value for points
not matching text search will be 1 - interactive_text_search_alpha_contrast
alpha: float (optional, default: None)
The alpha blending value, between 0 (transparent) and 1 (opaque).
Returns
------- | interactive | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
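A usage sketch for ``interactive``; the hover columns are illustrative and the result is displayed with ``show`` from earlier. The optional bokeh dependencies are assumed to be installed:
import numpy as np
import pandas as pd
import umap
import umap.plot
from sklearn.datasets import load_digits

digits = load_digits()
mapper = umap.UMAP().fit(digits.data)

# Tooltip columns are illustrative; any per-sample columns work.
hover = pd.DataFrame(
    {"index": np.arange(digits.target.shape[0]), "digit": digits.target}
)
p = umap.plot.interactive(mapper, labels=digits.target, hover_data=hover, point_size=2)
umap.plot.show(p)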
def nearest_neighbour_distribution(umap_object, bins=25, ax=None):
"""Create a histogram of the average distance to each points
nearest neighbors.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has an embedding.
bins: int (optional, default 25)
Number of bins to put the points into
ax: matplotlib axis (optional, default None)
A matplotlib axis to plot to, or, if None, a new
axis will be created and returned.
Returns
-------
"""
nn_distances = average_nn_distance(umap_object.graph_)
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel(f"Average distance to nearest neighbors")
ax.set_ylabel("Frequency")
ax.hist(nn_distances, bins=bins)
return ax | Create a histogram of the average distance to each point's
nearest neighbors.
Parameters
----------
umap_object: trained UMAP object
A trained UMAP object that has an embedding.
bins: int (optional, default 25)
Number of bins to put the points into
ax: matplotlib axis (optional, default None)
A matplotlib axis to plot to, or, if None, a new
axis will be created and returned.
Returns
------- | nearest_neighbour_distribution | python | lmcinnes/umap | umap/plot.py | https://github.com/lmcinnes/umap/blob/master/umap/plot.py | BSD-3-Clause |
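A short usage sketch, assuming an illustrative fitted model; ``bins`` only controls the histogram resolution:
import umap
import umap.plot
from sklearn.datasets import load_digits

mapper = umap.UMAP().fit(load_digits().data)
ax = umap.plot.nearest_neighbour_distribution(mapper, bins=50)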
def clip(val):
"""Standard clamping of a value into a fixed range (in this case -4.0 to
4.0)
Parameters
----------
val: float
The value to be clamped.
Returns
-------
The clamped value, now fixed to be in the range -4.0 to 4.0.
"""
if val > 4.0:
return 4.0
elif val < -4.0:
return -4.0
else:
return val | Standard clamping of a value into a fixed range (in this case -4.0 to
4.0)
Parameters
----------
val: float
The value to be clamped.
Returns
-------
The clamped value, now fixed to be in the range -4.0 to 4.0. | clip | python | lmcinnes/umap | umap/layouts.py | https://github.com/lmcinnes/umap/blob/master/umap/layouts.py | BSD-3-Clause |
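A tiny worked check of the clamping behaviour described above:
from umap.layouts import clip

print(clip(5.2))   # 4.0  (clamped from above)
print(clip(-7.1))  # -4.0 (clamped from below)
print(clip(0.3))   # 0.3  (already inside the range, returned unchanged)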
def rdist(x, y):
"""Reduced Euclidean distance.
Parameters
----------
x: array of shape (embedding_dim,)
y: array of shape (embedding_dim,)
Returns
-------
The squared euclidean distance between x and y
"""
result = 0.0
dim = x.shape[0]
for i in range(dim):
diff = x[i] - y[i]
result += diff * diff
return result | Reduced Euclidean distance.
Parameters
----------
x: array of shape (embedding_dim,)
y: array of shape (embedding_dim,)
Returns
-------
The squared euclidean distance between x and y | rdist | python | lmcinnes/umap | umap/layouts.py | https://github.com/lmcinnes/umap/blob/master/umap/layouts.py | BSD-3-Clause |
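A tiny worked example of the reduced (squared) Euclidean distance; contiguous float32 arrays are used to match how the optimizer calls it:
import numpy as np
from umap.layouts import rdist

x = np.array([0.0, 0.0], dtype=np.float32)
y = np.array([3.0, 4.0], dtype=np.float32)
print(rdist(x, y))  # 25.0 -- the squared distance; the true Euclidean distance is 5.0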
def optimize_layout_euclidean(
head_embedding,
tail_embedding,
head,
tail,
n_epochs,
n_vertices,
epochs_per_sample,
a,
b,
rng_state,
gamma=1.0,
initial_alpha=1.0,
negative_sample_rate=5.0,
parallel=False,
verbose=False,
densmap=False,
densmap_kwds=None,
tqdm_kwds=None,
move_other=False,
):
"""Improve an embedding using stochastic gradient descent to minimize the
fuzzy set cross entropy between the 1-skeletons of the high dimensional
and low dimensional fuzzy simplicial sets. In practice this is done by
sampling edges based on their membership strength (with the (1-p) terms
coming from negative sampling similar to word2vec).
Parameters
----------
head_embedding: array of shape (n_samples, n_components)
The initial embedding to be improved by SGD.
tail_embedding: array of shape (source_samples, n_components)
The reference embedding of embedded points. If not embedding new
previously unseen points with respect to an existing embedding this
is simply the head_embedding (again); otherwise it provides the
existing embedding to embed with respect to.
head: array of shape (n_1_simplices)
The indices of the heads of 1-simplices with non-zero membership.
tail: array of shape (n_1_simplices)
The indices of the tails of 1-simplices with non-zero membership.
n_epochs: int, or list of int
The number of training epochs to use in optimization, or a list of
epochs at which to save the embedding. In case of a list, the optimization
will use the maximum number of epochs in the list, and will return a list
of embedding in the order of increasing epoch, regardless of the order in
the epoch list.
n_vertices: int
The number of vertices (0-simplices) in the dataset.
epochs_per_sample: array of shape (n_1_simplices)
A float value of the number of epochs per 1-simplex. 1-simplices with
weaker membership strength will have more epochs between being sampled.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
rng_state: array of int64, shape (3,)
The internal state of the rng
gamma: float (optional, default 1.0)
Weight to apply to negative samples.
initial_alpha: float (optional, default 1.0)
Initial learning rate for the SGD.
negative_sample_rate: int (optional, default 5)
Number of negative samples to use per positive sample.
parallel: bool (optional, default False)
Whether to run the computation using numba parallel.
Running in parallel is non-deterministic, and is not used
if a random seed has been set, to ensure reproducibility.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
densmap: bool (optional, default False)
Whether to use the density-augmented densMAP objective
densmap_kwds: dict (optional, default None)
Auxiliary data for densMAP
tqdm_kwds: dict (optional, default None)
Keyword arguments for tqdm progress bar.
move_other: bool (optional, default False)
Whether to adjust tail_embedding alongside head_embedding
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding.
"""
dim = head_embedding.shape[1]
alpha = initial_alpha
epochs_per_negative_sample = epochs_per_sample / negative_sample_rate
epoch_of_next_negative_sample = epochs_per_negative_sample.copy()
epoch_of_next_sample = epochs_per_sample.copy()
# Fix for calling UMAP many times for small datasets, otherwise we spend here
# a lot of time in compilation step (first call to numba function)
optimize_fn = _get_optimize_layout_euclidean_single_epoch_fn(parallel)
if densmap_kwds is None:
densmap_kwds = {}
if tqdm_kwds is None:
tqdm_kwds = {}
if densmap:
dens_init_fn = numba.njit(
_optimize_layout_euclidean_densmap_epoch_init,
fastmath=True,
parallel=parallel,
)
dens_mu_tot = np.sum(densmap_kwds["mu_sum"]) / 2
dens_lambda = densmap_kwds["lambda"]
dens_R = densmap_kwds["R"]
dens_mu = densmap_kwds["mu"]
dens_phi_sum = np.zeros(n_vertices, dtype=np.float32)
dens_re_sum = np.zeros(n_vertices, dtype=np.float32)
dens_var_shift = densmap_kwds["var_shift"]
    else:
        dens_init_fn = None
        dens_var_shift = 0.0
        dens_mu_tot = 0
        dens_lambda = 0
        dens_R = np.zeros(1, dtype=np.float32)
        dens_mu = np.zeros(1, dtype=np.float32)
        dens_phi_sum = np.zeros(1, dtype=np.float32)
        dens_re_sum = np.zeros(1, dtype=np.float32)
epochs_list = None
embedding_list = []
if isinstance(n_epochs, list):
epochs_list = n_epochs
n_epochs = max(epochs_list)
if "disable" not in tqdm_kwds:
tqdm_kwds["disable"] = not verbose
rng_state_per_sample = np.full(
(head_embedding.shape[0], len(rng_state)), rng_state, dtype=np.int64
) + head_embedding[:, 0].astype(np.float64).view(np.int64).reshape(-1, 1)
for n in tqdm(range(n_epochs), **tqdm_kwds):
densmap_flag = (
densmap
and (densmap_kwds["lambda"] > 0)
and (((n + 1) / float(n_epochs)) > (1 - densmap_kwds["frac"]))
)
if densmap_flag:
            # dens_init_fn and dens_var_shift are defined whenever densmap is True,
            # and densmap_flag can only be True in that case.
            dens_init_fn(
head_embedding,
tail_embedding,
head,
tail,
a,
b,
dens_re_sum,
dens_phi_sum,
)
            dens_re_std = np.sqrt(np.var(dens_re_sum) + dens_var_shift)
dens_re_mean = np.mean(dens_re_sum)
dens_re_cov = np.dot(dens_re_sum, dens_R) / (n_vertices - 1)
else:
dens_re_std = 0
dens_re_mean = 0
dens_re_cov = 0
optimize_fn(
head_embedding,
tail_embedding,
head,
tail,
n_vertices,
epochs_per_sample,
a,
b,
rng_state_per_sample,
gamma,
dim,
move_other,
alpha,
epochs_per_negative_sample,
epoch_of_next_negative_sample,
epoch_of_next_sample,
n,
densmap_flag,
dens_phi_sum,
dens_re_sum,
dens_re_cov,
dens_re_std,
dens_re_mean,
dens_lambda,
dens_R,
dens_mu,
dens_mu_tot,
)
alpha = initial_alpha * (1.0 - (float(n) / float(n_epochs)))
        if verbose and n % max(1, int(n_epochs / 10)) == 0:
            print("\tcompleted ", n, " / ", n_epochs, "epochs")
if epochs_list is not None and n in epochs_list:
embedding_list.append(head_embedding.copy())
# Add the last embedding to the list as well
if epochs_list is not None:
embedding_list.append(head_embedding.copy())
return head_embedding if epochs_list is None else embedding_list | Improve an embedding using stochastic gradient descent to minimize the
fuzzy set cross entropy between the 1-skeletons of the high dimensional
and low dimensional fuzzy simplicial sets. In practice this is done by
sampling edges based on their membership strength (with the (1-p) terms
coming from negative sampling similar to word2vec).
Parameters
----------
head_embedding: array of shape (n_samples, n_components)
The initial embedding to be improved by SGD.
tail_embedding: array of shape (source_samples, n_components)
The reference embedding of embedded points. If not embedding new
previously unseen points with respect to an existing embedding this
is simply the head_embedding (again); otherwise it provides the
existing embedding to embed with respect to.
head: array of shape (n_1_simplices)
The indices of the heads of 1-simplices with non-zero membership.
tail: array of shape (n_1_simplices)
The indices of the tails of 1-simplices with non-zero membership.
n_epochs: int, or list of int
The number of training epochs to use in optimization, or a list of
epochs at which to save the embedding. In case of a list, the optimization
will use the maximum number of epochs in the list, and will return a list
    of embeddings in the order of increasing epoch, regardless of the order in
the epoch list.
n_vertices: int
The number of vertices (0-simplices) in the dataset.
epochs_per_sample: array of shape (n_1_simplices)
A float value of the number of epochs per 1-simplex. 1-simplices with
weaker membership strength will have more epochs between being sampled.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
rng_state: array of int64, shape (3,)
The internal state of the rng
gamma: float (optional, default 1.0)
Weight to apply to negative samples.
initial_alpha: float (optional, default 1.0)
Initial learning rate for the SGD.
negative_sample_rate: int (optional, default 5)
Number of negative samples to use per positive sample.
parallel: bool (optional, default False)
Whether to run the computation using numba parallel.
Running in parallel is non-deterministic, and is not used
if a random seed has been set, to ensure reproducibility.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
densmap: bool (optional, default False)
Whether to use the density-augmented densMAP objective
densmap_kwds: dict (optional, default None)
Auxiliary data for densMAP
tqdm_kwds: dict (optional, default None)
Keyword arguments for tqdm progress bar.
move_other: bool (optional, default False)
Whether to adjust tail_embedding alongside head_embedding
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding. | optimize_layout_euclidean | python | lmcinnes/umap | umap/layouts.py | https://github.com/lmcinnes/umap/blob/master/umap/layouts.py | BSD-3-Clause |
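A minimal, hedged sketch of driving optimize_layout_euclidean directly with synthetic inputs. In normal use UMAP.fit() builds the graph, epochs_per_sample, and the a/b curve parameters; the toy edge list, the rng_state values, and the a/b numbers below (roughly what find_ab_params(1.0, 0.1) produces) are all assumptions for illustration.

import numpy as np
from umap.layouts import optimize_layout_euclidean

rng = np.random.default_rng(42)
n_vertices, n_components = 100, 2
# Random initial embedding; head_embedding and tail_embedding are the same array
# because we are not embedding new points against an existing embedding.
embedding = rng.uniform(-10.0, 10.0, size=(n_vertices, n_components)).astype(np.float32)

# Toy 1-skeleton: connect each vertex to its successor, one epoch between samples.
head = np.arange(n_vertices - 1)
tail = np.arange(1, n_vertices)
epochs_per_sample = np.ones(len(head), dtype=np.float64)
rng_state = np.array([1, 2, 3], dtype=np.int64)

result = optimize_layout_euclidean(
    embedding, embedding, head, tail,
    n_epochs=50, n_vertices=n_vertices,
    epochs_per_sample=epochs_per_sample,
    a=1.576, b=0.895,
    rng_state=rng_state,
    move_other=True,
)
print(result.shape)  # (100, 2); the input array is optimized in place and returned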
def optimize_layout_generic(
head_embedding,
tail_embedding,
head,
tail,
n_epochs,
n_vertices,
epochs_per_sample,
a,
b,
rng_state,
gamma=1.0,
initial_alpha=1.0,
negative_sample_rate=5.0,
output_metric=dist.euclidean,
output_metric_kwds=(),
verbose=False,
tqdm_kwds=None,
move_other=False,
):
"""Improve an embedding using stochastic gradient descent to minimize the
fuzzy set cross entropy between the 1-skeletons of the high dimensional
and low dimensional fuzzy simplicial sets. In practice this is done by
sampling edges based on their membership strength (with the (1-p) terms
coming from negative sampling similar to word2vec).
Parameters
----------
head_embedding: array of shape (n_samples, n_components)
The initial embedding to be improved by SGD.
tail_embedding: array of shape (source_samples, n_components)
The reference embedding of embedded points. If not embedding new
previously unseen points with respect to an existing embedding this
is simply the head_embedding (again); otherwise it provides the
existing embedding to embed with respect to.
head: array of shape (n_1_simplices)
The indices of the heads of 1-simplices with non-zero membership.
tail: array of shape (n_1_simplices)
The indices of the tails of 1-simplices with non-zero membership.
n_epochs: int
The number of training epochs to use in optimization.
n_vertices: int
The number of vertices (0-simplices) in the dataset.
epochs_per_sample: array of shape (n_1_simplices)
A float value of the number of epochs per 1-simplex. 1-simplices with
weaker membership strength will have more epochs between being sampled.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
rng_state: array of int64, shape (3,)
The internal state of the rng
gamma: float (optional, default 1.0)
Weight to apply to negative samples.
initial_alpha: float (optional, default 1.0)
Initial learning rate for the SGD.
negative_sample_rate: int (optional, default 5)
Number of negative samples to use per positive sample.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
tqdm_kwds: dict (optional, default None)
Keyword arguments for tqdm progress bar.
move_other: bool (optional, default False)
Whether to adjust tail_embedding alongside head_embedding
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding.
"""
dim = head_embedding.shape[1]
alpha = initial_alpha
epochs_per_negative_sample = epochs_per_sample / negative_sample_rate
epoch_of_next_negative_sample = epochs_per_negative_sample.copy()
epoch_of_next_sample = epochs_per_sample.copy()
optimize_fn = numba.njit(
_optimize_layout_generic_single_epoch,
fastmath=True,
)
if tqdm_kwds is None:
tqdm_kwds = {}
if "disable" not in tqdm_kwds:
tqdm_kwds["disable"] = not verbose
rng_state_per_sample = np.full(
(head_embedding.shape[0], len(rng_state)), rng_state, dtype=np.int64
) + head_embedding[:, 0].astype(np.float64).view(np.int64).reshape(-1, 1)
for n in tqdm(range(n_epochs), **tqdm_kwds):
optimize_fn(
epochs_per_sample,
epoch_of_next_sample,
head,
tail,
head_embedding,
tail_embedding,
output_metric,
output_metric_kwds,
dim,
alpha,
move_other,
n,
epoch_of_next_negative_sample,
epochs_per_negative_sample,
rng_state_per_sample,
n_vertices,
a,
b,
gamma,
)
alpha = initial_alpha * (1.0 - (float(n) / float(n_epochs)))
return head_embedding | Improve an embedding using stochastic gradient descent to minimize the
fuzzy set cross entropy between the 1-skeletons of the high dimensional
and low dimensional fuzzy simplicial sets. In practice this is done by
sampling edges based on their membership strength (with the (1-p) terms
coming from negative sampling similar to word2vec).
Parameters
----------
head_embedding: array of shape (n_samples, n_components)
The initial embedding to be improved by SGD.
tail_embedding: array of shape (source_samples, n_components)
The reference embedding of embedded points. If not embedding new
previously unseen points with respect to an existing embedding this
is simply the head_embedding (again); otherwise it provides the
existing embedding to embed with respect to.
head: array of shape (n_1_simplices)
The indices of the heads of 1-simplices with non-zero membership.
tail: array of shape (n_1_simplices)
The indices of the tails of 1-simplices with non-zero membership.
n_epochs: int
The number of training epochs to use in optimization.
n_vertices: int
The number of vertices (0-simplices) in the dataset.
epochs_per_sample: array of shape (n_1_simplices)
A float value of the number of epochs per 1-simplex. 1-simplices with
weaker membership strength will have more epochs between being sampled.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
rng_state: array of int64, shape (3,)
The internal state of the rng
gamma: float (optional, default 1.0)
Weight to apply to negative samples.
initial_alpha: float (optional, default 1.0)
Initial learning rate for the SGD.
negative_sample_rate: int (optional, default 5)
Number of negative samples to use per positive sample.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
tqdm_kwds: dict (optional, default None)
Keyword arguments for tqdm progress bar.
move_other: bool (optional, default False)
Whether to adjust tail_embedding alongside head_embedding
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding. | optimize_layout_generic | python | lmcinnes/umap | umap/layouts.py | https://github.com/lmcinnes/umap/blob/master/umap/layouts.py | BSD-3-Clause |
def optimize_layout_inverse(
head_embedding,
tail_embedding,
head,
tail,
weight,
sigmas,
rhos,
n_epochs,
n_vertices,
epochs_per_sample,
a,
b,
rng_state,
gamma=1.0,
initial_alpha=1.0,
negative_sample_rate=5.0,
output_metric=dist.euclidean,
output_metric_kwds=(),
verbose=False,
tqdm_kwds=None,
move_other=False,
):
"""Improve an embedding using stochastic gradient descent to minimize the
fuzzy set cross entropy between the 1-skeletons of the high dimensional
and low dimensional fuzzy simplicial sets. In practice this is done by
sampling edges based on their membership strength (with the (1-p) terms
coming from negative sampling similar to word2vec).
Parameters
----------
head_embedding: array of shape (n_samples, n_components)
The initial embedding to be improved by SGD.
tail_embedding: array of shape (source_samples, n_components)
The reference embedding of embedded points. If not embedding new
previously unseen points with respect to an existing embedding this
is simply the head_embedding (again); otherwise it provides the
existing embedding to embed with respect to.
head: array of shape (n_1_simplices)
The indices of the heads of 1-simplices with non-zero membership.
tail: array of shape (n_1_simplices)
The indices of the tails of 1-simplices with non-zero membership.
weight: array of shape (n_1_simplices)
The membership weights of the 1-simplices.
sigmas:
rhos:
n_epochs: int
The number of training epochs to use in optimization.
n_vertices: int
The number of vertices (0-simplices) in the dataset.
epochs_per_sample: array of shape (n_1_simplices)
A float value of the number of epochs per 1-simplex. 1-simplices with
weaker membership strength will have more epochs between being sampled.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
rng_state: array of int64, shape (3,)
The internal state of the rng
gamma: float (optional, default 1.0)
Weight to apply to negative samples.
initial_alpha: float (optional, default 1.0)
Initial learning rate for the SGD.
negative_sample_rate: int (optional, default 5)
Number of negative samples to use per positive sample.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
tqdm_kwds: dict (optional, default None)
Keyword arguments for tqdm progress bar.
move_other: bool (optional, default False)
Whether to adjust tail_embedding alongside head_embedding
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding.
"""
dim = head_embedding.shape[1]
alpha = initial_alpha
epochs_per_negative_sample = epochs_per_sample / negative_sample_rate
epoch_of_next_negative_sample = epochs_per_negative_sample.copy()
epoch_of_next_sample = epochs_per_sample.copy()
optimize_fn = numba.njit(
_optimize_layout_inverse_single_epoch,
fastmath=True,
)
if tqdm_kwds is None:
tqdm_kwds = {}
if "disable" not in tqdm_kwds:
tqdm_kwds["disable"] = not verbose
for n in tqdm(range(n_epochs), **tqdm_kwds):
optimize_fn(
epochs_per_sample,
epoch_of_next_sample,
head,
tail,
head_embedding,
tail_embedding,
output_metric,
output_metric_kwds,
weight,
sigmas,
dim,
alpha,
move_other,
n,
epoch_of_next_negative_sample,
epochs_per_negative_sample,
rng_state,
n_vertices,
rhos,
gamma,
)
alpha = initial_alpha * (1.0 - (float(n) / float(n_epochs)))
return head_embedding | Improve an embedding using stochastic gradient descent to minimize the
fuzzy set cross entropy between the 1-skeletons of the high dimensional
and low dimensional fuzzy simplicial sets. In practice this is done by
sampling edges based on their membership strength (with the (1-p) terms
coming from negative sampling similar to word2vec).
Parameters
----------
head_embedding: array of shape (n_samples, n_components)
The initial embedding to be improved by SGD.
tail_embedding: array of shape (source_samples, n_components)
The reference embedding of embedded points. If not embedding new
previously unseen points with respect to an existing embedding this
is simply the head_embedding (again); otherwise it provides the
existing embedding to embed with respect to.
head: array of shape (n_1_simplices)
The indices of the heads of 1-simplices with non-zero membership.
tail: array of shape (n_1_simplices)
The indices of the tails of 1-simplices with non-zero membership.
weight: array of shape (n_1_simplices)
The membership weights of the 1-simplices.
sigmas:
rhos:
n_epochs: int
The number of training epochs to use in optimization.
n_vertices: int
The number of vertices (0-simplices) in the dataset.
epochs_per_sample: array of shape (n_1_simplices)
A float value of the number of epochs per 1-simplex. 1-simplices with
weaker membership strength will have more epochs between being sampled.
a: float
Parameter of differentiable approximation of right adjoint functor
b: float
Parameter of differentiable approximation of right adjoint functor
rng_state: array of int64, shape (3,)
The internal state of the rng
gamma: float (optional, default 1.0)
Weight to apply to negative samples.
initial_alpha: float (optional, default 1.0)
Initial learning rate for the SGD.
negative_sample_rate: int (optional, default 5)
Number of negative samples to use per positive sample.
verbose: bool (optional, default False)
Whether to report information on the current progress of the algorithm.
tqdm_kwds: dict (optional, default None)
Keyword arguments for tqdm progress bar.
move_other: bool (optional, default False)
Whether to adjust tail_embedding alongside head_embedding
Returns
-------
embedding: array of shape (n_samples, n_components)
The optimized embedding. | optimize_layout_inverse | python | lmcinnes/umap | umap/layouts.py | https://github.com/lmcinnes/umap/blob/master/umap/layouts.py | BSD-3-Clause |
def __init__(
self,
batch_size=None,
dims=None,
encoder=None,
decoder=None,
parametric_reconstruction=False,
parametric_reconstruction_loss_fcn=None,
parametric_reconstruction_loss_weight=1.0,
autoencoder_loss=False,
reconstruction_validation=None,
global_correlation_loss_weight=0,
landmark_loss_fn=None,
landmark_loss_weight=1.0,
keras_fit_kwargs={},
**kwargs,
):
"""
Parametric UMAP subclassing UMAP-learn, based on keras/tensorflow.
There is also a non-parametric implementation contained within to compare
with the base non-parametric implementation.
Parameters
----------
batch_size : int, optional
size of batch used for batch training, by default None
dims : tuple, optional
            dimensionality of the input data if not flat (e.g. (32, 32, 3) for images into a ConvNet), by default None
encoder : keras.Sequential, optional
The encoder Keras network
decoder : keras.Sequential, optional
the decoder Keras network
parametric_reconstruction : bool, optional
Whether the decoder is parametric or non-parametric, by default False
        parametric_reconstruction_loss_fcn : callable, optional
What loss function to use for parametric reconstruction,
by default keras.losses.BinaryCrossentropy
parametric_reconstruction_loss_weight : float, optional
How to weight the parametric reconstruction loss relative to umap loss, by default 1.0
        autoencoder_loss : bool, optional
            Whether to additionally train the encoder/decoder pair with a
            reconstruction (autoencoder) objective, by default False
reconstruction_validation : array, optional
validation X data for reconstruction loss, by default None
global_correlation_loss_weight : float, optional
Whether to additionally train on correlation of global pairwise relationships (>0), by default 0
landmark_loss_fn : callable, optional
The function to use for landmark loss, by default the euclidean distance
landmark_loss_weight : float, optional
How to weight the landmark loss relative to umap loss, by default 1.0
keras_fit_kwargs : dict, optional
additional arguments for model.fit (like callbacks), by default {}
"""
super().__init__(**kwargs)
# add to network
self.dims = dims # if this is an image, we should reshape for network
self.encoder = encoder # neural network used for embedding
self.decoder = decoder # neural network used for decoding
self.parametric_reconstruction = parametric_reconstruction
self.parametric_reconstruction_loss_weight = (
parametric_reconstruction_loss_weight
)
self.parametric_reconstruction_loss_fcn = parametric_reconstruction_loss_fcn
self.autoencoder_loss = autoencoder_loss
self.batch_size = batch_size
self.loss_report_frequency = 10
self.global_correlation_loss_weight = global_correlation_loss_weight
self.landmark_loss_fn = landmark_loss_fn
self.landmark_loss_weight = landmark_loss_weight
self.prev_epoch_X = None
self.window_vals = None
self.reconstruction_validation = (
reconstruction_validation # holdout data for reconstruction acc
)
self.keras_fit_kwargs = keras_fit_kwargs # arguments for model.fit
self.parametric_model = None
# Pass the random state on to keras. This will set the numpy,
# backend, and python random seeds
        # For reproducible training.
if isinstance(self.random_state, int):
keras.utils.set_random_seed(self.random_state)
# How many epochs to train for
# (different than n_epochs which is specific to each sample)
self.n_training_epochs = 1
# Set optimizer.
# Adam is better for parametric_embedding. Use gradient clipping by value.
self.optimizer = keras.optimizers.Adam(1e-3, clipvalue=4.0)
if self.encoder is not None:
if encoder.outputs[0].shape[-1] != self.n_components:
raise ValueError(
(
"Dimensionality of embedder network output ({}) does"
"not match n_components ({})".format(
encoder.outputs[0].shape[-1], self.n_components
)
)
) | Parametric UMAP subclassing UMAP-learn, based on keras/tensorflow.
There is also a non-parametric implementation contained within to compare
with the base non-parametric implementation.
Parameters
----------
batch_size : int, optional
size of batch used for batch training, by default None
dims : tuple, optional
    dimensionality of the input data if not flat (e.g. (32, 32, 3) for images into a ConvNet), by default None
encoder : keras.Sequential, optional
The encoder Keras network
decoder : keras.Sequential, optional
the decoder Keras network
parametric_reconstruction : bool, optional
Whether the decoder is parametric or non-parametric, by default False
parametric_reconstruction_loss_fcn : callable, optional
What loss function to use for parametric reconstruction,
by default keras.losses.BinaryCrossentropy
parametric_reconstruction_loss_weight : float, optional
How to weight the parametric reconstruction loss relative to umap loss, by default 1.0
autoencoder_loss : bool, optional
    Whether to additionally train the encoder/decoder pair with a
    reconstruction (autoencoder) objective, by default False
reconstruction_validation : array, optional
validation X data for reconstruction loss, by default None
global_correlation_loss_weight : float, optional
Whether to additionally train on correlation of global pairwise relationships (>0), by default 0
landmark_loss_fn : callable, optional
The function to use for landmark loss, by default the euclidean distance
landmark_loss_weight : float, optional
How to weight the landmark loss relative to umap loss, by default 1.0
keras_fit_kwargs : dict, optional
additional arguments for model.fit (like callbacks), by default {} | __init__ | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
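A hedged construction sketch with a custom keras encoder; the layer sizes and input dimensionality are illustrative, and the only hard requirement enforced above is that the encoder's final dimension equals n_components.

import keras
from umap.parametric_umap import ParametricUMAP

dims = (784,)  # e.g. flattened 28x28 images
encoder = keras.Sequential(
    [
        keras.layers.Input(shape=dims),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(2),  # must match n_components
    ]
)
embedder = ParametricUMAP(encoder=encoder, dims=dims, n_components=2)
# embedding = embedder.fit_transform(X)  # X of shape (n_samples, 784)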
def fit(self, X, y=None, precomputed_distances=None, landmark_positions=None):
"""Fit X into an embedded space.
Optionally use a precomputed distance matrix, y for supervised
dimension reduction, or landmarked positions.
Parameters
----------
X : array, shape (n_samples, n_features)
Contains a sample per row. If the method is 'exact', X may
be a sparse matrix of type 'csr', 'csc' or 'coo'.
Unlike UMAP, ParametricUMAP requires precomputed distances to
            be passed separately.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
precomputed_distances : array, shape (n_samples, n_samples), optional
            A precomputed square distance matrix. Unlike UMAP, ParametricUMAP
            still requires X to be passed separately for training.
landmark_positions : array, shape (n_samples, n_components), optional
The desired position in low-dimensional space of each sample in X.
Points that are not landmarks should have nan coordinates.
"""
if (self.prev_epoch_X is not None) & (landmark_positions is None):
# Add the landmark points for training, then make a landmark vector.
landmark_positions = np.stack(
[np.array([np.nan, np.nan])]*X.shape[0] + list(
self.transform(
self.prev_epoch_X
)
)
)
X = np.concatenate((X, self.prev_epoch_X))
if landmark_positions is not None:
len_X = len(X)
len_land = len(landmark_positions)
if len_X != len_land:
raise ValueError(
f"Length of x = {len_X}, length of landmark_positions \
= {len_land}, while it must be equal."
)
if self.metric == "precomputed":
if precomputed_distances is None:
raise ValueError(
"Precomputed distances must be supplied if metric \
is precomputed."
)
# prepare X for training the network
self._X = X
            # generate the graph on precomputed distances
return super().fit(
precomputed_distances, y, landmark_positions=landmark_positions
)
else:
return super().fit(X, y, landmark_positions=landmark_positions) | Fit X into an embedded space.
Optionally use a precomputed distance matrix, y for supervised
dimension reduction, or landmarked positions.
Parameters
----------
X : array, shape (n_samples, n_features)
Contains a sample per row. If the method is 'exact', X may
be a sparse matrix of type 'csr', 'csc' or 'coo'.
Unlike UMAP, ParametricUMAP requires precomputed distances to
    be passed separately.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
precomputed_distances : array, shape (n_samples, n_samples), optional
    A precomputed square distance matrix. Unlike UMAP, ParametricUMAP
    still requires X to be passed separately for training.
landmark_positions : array, shape (n_samples, n_components), optional
The desired position in low-dimensional space of each sample in X.
Points that are not landmarks should have nan coordinates. | fit | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def fit_transform(
self, X, y=None, precomputed_distances=None, landmark_positions=None
):
"""Fit X into an embedded space.
Optionally use a precomputed distance matrix, y for supervised
dimension reduction, or landmarked positions.
Parameters
----------
X : array, shape (n_samples, n_features)
Contains a sample per row. If the method is 'exact', X may
be a sparse matrix of type 'csr', 'csc' or 'coo'.
Unlike UMAP, ParametricUMAP requires precomputed distances to
            be passed separately.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
precomputed_distances : array, shape (n_samples, n_samples), optional
            A precomputed square distance matrix. Unlike UMAP, ParametricUMAP
            still requires X to be passed separately for training.
landmark_positions : array, shape (n_samples, n_components), optional
The desired position in low-dimensional space of each sample in X.
Points that are not landmarks should have nan coordinates.
"""
if (self.prev_epoch_X is not None) & (landmark_positions is None):
# Add the landmark points for training, then make a landmark vector.
landmark_positions = np.stack(
[np.array([np.nan, np.nan])]*X.shape[0] + list(
self.transform(
self.prev_epoch_X
)
)
)
X = np.concatenate((X, self.prev_epoch_X))
if landmark_positions is not None:
len_X = len(X)
len_land = len(landmark_positions)
if len_X != len_land:
raise ValueError(
f"Length of x = {len_X}, length of landmark_positions \
= {len_land}, while it must be equal."
)
if self.metric == "precomputed":
if precomputed_distances is None:
raise ValueError(
"Precomputed distances must be supplied if metric \
is precomputed."
)
# prepare X for training the network
self._X = X
# generate the graph on precomputed distances
# landmark positions are cleaned up inside the
# .fit() component of .fit_transform()
return super().fit_transform(
precomputed_distances, y, landmark_positions=landmark_positions
)
else:
# landmark positions are cleaned up inside the
# .fit() component of .fit_transform()
return super().fit_transform(X, y, landmark_positions=landmark_positions) | Fit X into an embedded space.
Optionally use a precomputed distance matrix, y for supervised
dimension reduction, or landmarked positions.
Parameters
----------
X : array, shape (n_samples, n_features)
Contains a sample per row. If the method is 'exact', X may
be a sparse matrix of type 'csr', 'csc' or 'coo'.
Unlike UMAP, ParametricUMAP requires precomputed distances to
    be passed separately.
y : array, shape (n_samples)
A target array for supervised dimension reduction. How this is
handled is determined by parameters UMAP was instantiated with.
The relevant attributes are ``target_metric`` and
``target_metric_kwds``.
precomputed_distances : array, shape (n_samples, n_samples), optional
A precomputed a square distance matrix. Unlike UMAP, ParametricUMAP
still requires X to be passed seperately for training.
landmark_positions : array, shape (n_samples, n_components), optional
The desired position in low-dimensional space of each sample in X.
Points that are not landmarks should have nan coordinates. | fit_transform | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
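A hedged usage sketch of the precomputed-distance path described above; X is synthetic and sklearn's pairwise_distances is just one way to build the square matrix.

import numpy as np
from sklearn.metrics import pairwise_distances
from umap.parametric_umap import ParametricUMAP

X = np.random.random((500, 20)).astype(np.float32)
D = pairwise_distances(X)  # (500, 500) square distance matrix

embedder = ParametricUMAP(metric="precomputed", dims=(20,))
# X trains the encoder network, D is used to build the fuzzy simplicial set
embedding = embedder.fit_transform(X, precomputed_distances=D)
print(embedding.shape)  # (500, 2)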
def transform(self, X, batch_size=None):
"""Transform X into the existing embedded space and return that
transformed output.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
batch_size : int, optional
Batch size for inference, defaults to the self.batch_size used in training.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the new data in low-dimensional space.
"""
batch_size = batch_size if batch_size else self.batch_size
return self.encoder.predict(
np.asanyarray(X), batch_size=batch_size, verbose=self.verbose
) | Transform X into the existing embedded space and return that
transformed output.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
batch_size : int, optional
Batch size for inference, defaults to the self.batch_size used in training.
Returns
-------
X_new : array, shape (n_samples, n_components)
Embedding of the new data in low-dimensional space. | transform | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def inverse_transform(self, X):
"""Transform X in the existing embedded space back into the input
data space and return that transformed output.
Parameters
----------
X : array, shape (n_samples, n_components)
New points to be inverse transformed.
Returns
-------
X_new : array, shape (n_samples, n_features)
Generated data points new data in data space.
"""
if self.parametric_reconstruction:
return self.decoder.predict(
np.asanyarray(X), batch_size=self.batch_size, verbose=self.verbose
)
else:
return super().inverse_transform(X) | Transform X in the existing embedded space back into the input
data space and return that transformed output.
Parameters
----------
X : array, shape (n_samples, n_components)
New points to be inverse transformed.
Returns
-------
X_new : array, shape (n_samples, n_features)
Generated data points new data in data space. | inverse_transform | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def _define_model(self):
"""Define the model in keras"""
prlw = self.parametric_reconstruction_loss_weight
self.parametric_model = UMAPModel(
self._a,
self._b,
negative_sample_rate=self.negative_sample_rate,
encoder=self.encoder,
decoder=self.decoder,
parametric_reconstruction_loss_fn=self.parametric_reconstruction_loss_fcn,
parametric_reconstruction=self.parametric_reconstruction,
parametric_reconstruction_loss_weight=prlw,
global_correlation_loss_weight=self.global_correlation_loss_weight,
autoencoder_loss=self.autoencoder_loss,
landmark_loss_fn=self.landmark_loss_fn,
landmark_loss_weight=self.landmark_loss_weight,
optimizer=self.optimizer,
) | Define the model in keras | _define_model | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def add_landmarks(
self,
X,
sample_pct=0.01,
sample_mode="uniform",
landmark_loss_weight=0.01,
idx=None,
):
"""Add some points from a dataset X as "landmarks."
Parameters
----------
X : array, shape (n_samples, n_features)
Old data to be retained.
sample_pct : float, optional
Percentage of old data to use as landmarks.
sample_mode : str, optional
            Method for sampling points. Allows "uniform" and "predetermined".
landmark_loss_weight : float, optional
Multiplier for landmark loss function.
"""
self.sample_pct = sample_pct
self.sample_mode = sample_mode
self.landmark_loss_weight = landmark_loss_weight
if self.sample_mode == "uniform":
self.prev_epoch_idx = list(
np.random.choice(
range(X.shape[0]), int(X.shape[0]*sample_pct), replace=False
)
)
self.prev_epoch_X = X[self.prev_epoch_idx]
        elif self.sample_mode == "predetermined":
            if idx is None:
                raise ValueError(
                    'sample_mode="predetermined" requires idx to be provided.'
                )
else:
self.prev_epoch_idx = idx
self.prev_epoch_X = X[self.prev_epoch_idx]
else:
raise ValueError(
"Choice of sample_mode is not supported."
) | Add some points from a dataset X as "landmarks."
Parameters
----------
X : array, shape (n_samples, n_features)
Old data to be retained.
sample_pct : float, optional
Percentage of old data to use as landmarks.
sample_mode : str, optional
    Method for sampling points. Allows "uniform" and "predetermined".
landmark_loss_weight : float, optional
Multiplier for landmark loss function. | add_landmarks | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
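A hedged sketch of the continued-training workflow these landmarks support; X_first and X_second are hypothetical batches, and with the default n_components=2 the nan landmark rows built in fit() line up with the transformed landmark positions.

import numpy as np
from umap.parametric_umap import ParametricUMAP

X_first = np.random.random((1000, 20)).astype(np.float32)
X_second = np.random.random((1000, 20)).astype(np.float32)

embedder = ParametricUMAP()
embedder.fit(X_first)
# Retain 1% of the first batch as landmarks so the next fit stays aligned with it.
embedder.add_landmarks(X_first, sample_pct=0.01, landmark_loss_weight=0.01)
embedder.fit(X_second)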
def to_ONNX(self, save_location):
"""Exports trained parametric UMAP as ONNX."""
# Extract encoder
km = self.encoder
# Extract weights
pm = PumapNet(self.dims[0], self.n_components)
pm = weight_copier(km, pm)
# Put in ONNX
dummy_input = torch.randn(1, self.dims[0])
# Invoke export
return torch.onnx.export(pm, dummy_input, save_location) | Exports trained parametric UMAP as ONNX. | to_ONNX | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
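A minimal export sketch, assuming the default encoder architecture (which is what PumapNet and weight_copier below expect), a flat 1-tuple dims, and that torch is installed.

import numpy as np
from umap.parametric_umap import ParametricUMAP

X = np.random.random((500, 20)).astype(np.float32)
embedder = ParametricUMAP(dims=(20,))
embedder.fit(X)
embedder.to_ONNX("pumap_encoder.onnx")  # writes an ONNX copy of the encoder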
def get_graph_elements(graph_, n_epochs):
"""
gets elements of graphs, weights, and number of epochs per edge
Parameters
----------
graph_ : scipy.sparse.csr.csr_matrix
umap graph of probabilities
n_epochs : int
maximum number of epochs per edge
Returns
-------
graph scipy.sparse.csr.csr_matrix
umap graph
epochs_per_sample np.array
number of epochs to train each sample for
head np.array
edge head
tail np.array
edge tail
weight np.array
edge weight
n_vertices int
number of vertices in graph
"""
### should we remove redundancies () here??
# graph_ = remove_redundant_edges(graph_)
graph = graph_.tocoo()
# eliminate duplicate entries by summing them together
graph.sum_duplicates()
# number of vertices in dataset
n_vertices = graph.shape[1]
# get the number of epochs based on the size of the dataset
if n_epochs is None:
# For smaller datasets we can use more epochs
if graph.shape[0] <= 10000:
n_epochs = 500
else:
n_epochs = 200
# remove elements with very low probability
graph.data[graph.data < (graph.data.max() / float(n_epochs))] = 0.0
graph.eliminate_zeros()
# get epochs per sample based upon edge probability
epochs_per_sample = n_epochs * graph.data
head = graph.row
tail = graph.col
weight = graph.data
return graph, epochs_per_sample, head, tail, weight, n_vertices | gets elements of graphs, weights, and number of epochs per edge
Parameters
----------
graph_ : scipy.sparse.csr.csr_matrix
umap graph of probabilities
n_epochs : int
maximum number of epochs per edge
Returns
-------
graph scipy.sparse.csr.csr_matrix
umap graph
epochs_per_sample np.array
number of epochs to train each sample for
head np.array
edge head
tail np.array
edge tail
weight np.array
edge weight
n_vertices int
number of vertices in graph | get_graph_elements | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
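A hedged sketch of inspecting these elements from the graph_ attribute of a fitted (non-parametric) UMAP model; the data is synthetic.

import numpy as np
from umap import UMAP
from umap.parametric_umap import get_graph_elements

X = np.random.random((300, 10)).astype(np.float32)
mapper = UMAP(n_epochs=200).fit(X)
graph, epochs_per_sample, head, tail, weight, n_vertices = get_graph_elements(
    mapper.graph_, n_epochs=200
)
print(n_vertices, head.shape, epochs_per_sample.min(), epochs_per_sample.max())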
def init_embedding_from_graph(
_raw_data, graph, n_components, random_state, metric, _metric_kwds, init="spectral"
):
"""Initialize embedding using graph. This is for direct embeddings.
Parameters
----------
init : str, optional
Type of initialization to use. Either random, or spectral, by default "spectral"
Returns
-------
embedding : np.array
the initialized embedding
"""
if random_state is None:
random_state = check_random_state(None)
if isinstance(init, str) and init == "random":
embedding = random_state.uniform(
low=-10.0, high=10.0, size=(graph.shape[0], n_components)
).astype(np.float32)
elif isinstance(init, str) and init == "spectral":
# We add a little noise to avoid local minima for optimization to come
initialisation = spectral_layout(
_raw_data,
graph,
n_components,
random_state,
metric=metric,
metric_kwds=_metric_kwds,
)
expansion = 10.0 / np.abs(initialisation).max()
embedding = (initialisation * expansion).astype(
np.float32
) + random_state.normal(
scale=0.0001, size=[graph.shape[0], n_components]
).astype(
np.float32
)
else:
init_data = np.array(init)
if len(init_data.shape) == 2:
if np.unique(init_data, axis=0).shape[0] < init_data.shape[0]:
tree = KDTree(init_data)
dist, ind = tree.query(init_data, k=2)
nndist = np.mean(dist[:, 1])
embedding = init_data + random_state.normal(
scale=0.001 * nndist, size=init_data.shape
).astype(np.float32)
else:
embedding = init_data
return embedding | Initialize embedding using graph. This is for direct embeddings.
Parameters
----------
init : str, optional
Type of initialization to use. Either random, or spectral, by default "spectral"
Returns
-------
embedding : np.array
the initialized embedding | init_embedding_from_graph | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def convert_distance_to_log_probability(distances, a=1.0, b=1.0):
"""
convert distance representation into log probability,
as a function of a, b params
Parameters
----------
distances : array
euclidean distance between two points in embedding
a : float, optional
parameter based on min_dist, by default 1.0
b : float, optional
parameter based on min_dist, by default 1.0
Returns
-------
float
log probability in embedding space
"""
return -ops.log1p(a * distances ** (2 * b)) | convert distance representation into log probability,
as a function of a, b params
Parameters
----------
distances : array
euclidean distance between two points in embedding
a : float, optional
parameter based on min_dist, by default 1.0
b : float, optional
parameter based on min_dist, by default 1.0
Returns
-------
float
log probability in embedding space | convert_distance_to_log_probability | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
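A small plain-numpy sketch of the same formula, log(1 / (1 + a * d**(2b))) = -log1p(a * d**(2b)); the a and b values are only typical of min_dist=0.1, spread=1.0.

import numpy as np

d = np.array([0.0, 0.5, 1.0, 2.0])
a, b = 1.576, 0.895
log_p = -np.log1p(a * d ** (2 * b))
p = np.exp(log_p)  # membership probability in embedding space
print(np.round(p, 3))  # 1.0 at distance 0, decaying towards 0 with distance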
def compute_cross_entropy(
probabilities_graph, log_probabilities_distance, EPS=1e-4, repulsion_strength=1.0
):
"""
Compute cross entropy between low and high probability
Parameters
----------
probabilities_graph : array
high dimensional probabilities
log_probabilities_distance : array
low dimensional log probabilities
EPS : float, optional
offset to ensure log is taken of a positive number, by default 1e-4
repulsion_strength : float, optional
strength of repulsion between negative samples, by default 1.0
Returns
-------
attraction_term: float
attraction term for cross entropy loss
repellant_term: float
repellent term for cross entropy loss
cross_entropy: float
cross entropy umap loss
"""
# cross entropy
attraction_term = -probabilities_graph * ops.log_sigmoid(log_probabilities_distance)
# use numerically stable repellent term
# Shi et al. 2022 (https://arxiv.org/abs/2111.08851)
# log(1 - sigmoid(logits)) = log(sigmoid(logits)) - logits
repellant_term = (
-(1.0 - probabilities_graph)
* (ops.log_sigmoid(log_probabilities_distance) - log_probabilities_distance)
* repulsion_strength
)
# balance the expected losses between attraction and repel
CE = attraction_term + repellant_term
return attraction_term, repellant_term, CE | Compute cross entropy between low and high probability
Parameters
----------
probabilities_graph : array
high dimensional probabilities
log_probabilities_distance : array
low dimensional log probabilities
EPS : float, optional
offset to ensure log is taken of a positive number, by default 1e-4
repulsion_strength : float, optional
strength of repulsion between negative samples, by default 1.0
Returns
-------
attraction_term: float
attraction term for cross entropy loss
repellant_term: float
repellent term for cross entropy loss
cross_entropy: float
cross entropy umap loss | compute_cross_entropy | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
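A plain-numpy sketch of why the repellent term uses log_sigmoid(logits) - logits rather than log(1 - sigmoid(logits)): the identity is exact, but the naive form underflows to log(0) for large logits.

import numpy as np

def log_sigmoid(x):
    # numerically stable log(sigmoid(x))
    return -np.logaddexp(0.0, -x)

logits = np.array([-40.0, 0.0, 40.0])
stable = log_sigmoid(logits) - logits  # approximately [0.0, -0.693, -40.0]
naive = np.log(1.0 - 1.0 / (1.0 + np.exp(-logits)))  # last entry underflows to -inf
print(stable, naive)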
def prepare_networks(
encoder,
decoder,
n_components,
dims,
n_data,
parametric_reconstruction,
init_embedding=None,
):
"""
Generates a set of keras networks for the encoder and decoder if one has not already
been predefined.
Parameters
----------
encoder : keras.Sequential
The encoder Keras network
decoder : keras.Sequential
the decoder Keras network
n_components : int
the dimensionality of the latent space
dims : tuple of shape (dim1, dim2, dim3...)
dimensionality of data
    n_data : int
        number of elements in the training dataset
parametric_reconstruction : bool
Whether the decoder is parametric or non-parametric
init_embedding : array (optional, default None)
The initial embedding, for nonparametric embeddings
Returns
-------
encoder: keras.Sequential
encoder keras network
decoder: keras.Sequential
decoder keras network
"""
if encoder is None:
encoder = keras.Sequential(
[
keras.layers.Input(shape=dims),
keras.layers.Flatten(),
keras.layers.Dense(units=100, activation="relu"),
keras.layers.Dense(units=100, activation="relu"),
keras.layers.Dense(units=100, activation="relu"),
keras.layers.Dense(units=n_components, name="z"),
]
)
if decoder is None:
if parametric_reconstruction:
decoder = keras.Sequential(
[
keras.layers.Input(shape=(n_components,)),
keras.layers.Dense(units=100, activation="relu"),
keras.layers.Dense(units=100, activation="relu"),
keras.layers.Dense(units=100, activation="relu"),
keras.layers.Dense(
units=np.prod(dims), name="recon", activation=None
),
keras.layers.Reshape(dims),
]
)
return encoder, decoder | Generates a set of keras networks for the encoder and decoder if one has not already
been predefined.
Parameters
----------
encoder : keras.Sequential
The encoder Keras network
decoder : keras.Sequential
the decoder Keras network
n_components : int
the dimensionality of the latent space
dims : tuple of shape (dim1, dim2, dim3...)
dimensionality of data
n_data : int
    number of elements in the training dataset
parametric_reconstruction : bool
Whether the decoder is parametric or non-parametric
init_embedding : array (optional, default None)
The initial embedding, for nonparametric embeddings
Returns
-------
encoder: keras.Sequential
encoder keras network
decoder: keras.Sequential
decoder keras network | prepare_networks | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def construct_edge_dataset(
X,
graph_,
n_epochs,
batch_size,
parametric_reconstruction,
global_correlation_loss_weight,
landmark_positions=None,
):
"""
Construct a tf.data.Dataset of edges, sampled by edge weight.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
graph_ : scipy.sparse.csr.csr_matrix
Generated UMAP graph
n_epochs : int
# of epochs to train each edge
batch_size : int
batch size
parametric_reconstruction : bool
Whether the decoder is parametric or non-parametric
landmark_positions : array, shape (n_samples, n_components), optional
The desired position in low-dimensional space of each sample in X.
Points that are not landmarks should have nan coordinates.
"""
def gather_index(tensor, index):
return tensor[index]
    # if X is > 512Mb in size, we need to use a different, slower method for
    # batching data.
    gather_indices_in_python = X.nbytes * 1e-9 > 0.5
    if landmark_positions is not None:
        gather_landmark_indices_in_python = landmark_positions.nbytes * 1e-9 > 0.5
def gather_X(edge_to, edge_from):
        # gather data from indexes (edges) in either numpy or tf, depending on array size
if gather_indices_in_python:
edge_to_batch = tf.py_function(gather_index, [X, edge_to], [tf.float32])[0]
edge_from_batch = tf.py_function(
gather_index, [X, edge_from], [tf.float32]
)[0]
else:
edge_to_batch = tf.gather(X, edge_to)
edge_from_batch = tf.gather(X, edge_from)
return edge_to, edge_from, edge_to_batch, edge_from_batch
def get_outputs(edge_to, edge_from, edge_to_batch, edge_from_batch):
outputs = {"umap": ops.repeat(0, batch_size)}
if global_correlation_loss_weight > 0:
outputs["global_correlation"] = edge_to_batch
if parametric_reconstruction:
# add reconstruction to iterator output
# edge_out = ops.concatenate([edge_to_batch, edge_from_batch], axis=0)
outputs["reconstruction"] = edge_to_batch
if landmark_positions is not None:
if gather_landmark_indices_in_python:
outputs["landmark_to"] = tf.py_function(
gather_index, [landmark_positions, edge_to], [tf.float32]
)[0]
else:
# Make sure we explicitly cast landmark_positions to float32,
# as it's user-provided and needs to play nice with loss functions.
outputs["landmark_to"] = tf.gather(landmark_positions, edge_to)
return (edge_to_batch, edge_from_batch), outputs
# get data from graph
_, epochs_per_sample, head, tail, weight, n_vertices = get_graph_elements(
graph_, n_epochs
)
# number of elements per batch for embedding
if batch_size is None:
# batch size can be larger if its just over embeddings
batch_size = int(np.min([n_vertices, 1000]))
edges_to_exp, edges_from_exp = (
np.repeat(head, epochs_per_sample.astype("int")),
np.repeat(tail, epochs_per_sample.astype("int")),
)
# shuffle edges
shuffle_mask = np.random.permutation(range(len(edges_to_exp)))
edges_to_exp = edges_to_exp[shuffle_mask].astype(np.int64)
edges_from_exp = edges_from_exp[shuffle_mask].astype(np.int64)
# create edge iterator
edge_dataset = tf.data.Dataset.from_tensor_slices((edges_to_exp, edges_from_exp))
edge_dataset = edge_dataset.repeat()
edge_dataset = edge_dataset.shuffle(10000)
edge_dataset = edge_dataset.batch(batch_size, drop_remainder=True)
edge_dataset = edge_dataset.map(
gather_X, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
edge_dataset = edge_dataset.map(
get_outputs, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
edge_dataset = edge_dataset.prefetch(10)
return edge_dataset, batch_size, len(edges_to_exp), head, tail, weight | Construct a tf.data.Dataset of edges, sampled by edge weight.
Parameters
----------
X : array, shape (n_samples, n_features)
New data to be transformed.
graph_ : scipy.sparse.csr.csr_matrix
Generated UMAP graph
n_epochs : int
# of epochs to train each edge
batch_size : int
batch size
parametric_reconstruction : bool
Whether the decoder is parametric or non-parametric
landmark_positions : array, shape (n_samples, n_components), optional
The desired position in low-dimensional space of each sample in X.
Points that are not landmarks should have nan coordinates. | construct_edge_dataset | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def should_pickle(key, val):
"""
Checks if a dictionary item can be pickled
Parameters
----------
    key : str
        key for dictionary element
    val : object
        element of dictionary
Returns
-------
picklable: bool
whether the dictionary item can be pickled
"""
try:
## make sure object can be pickled and then re-read
# pickle object
pickled = codecs.encode(pickle.dumps(val), "base64").decode()
# unpickle object
_ = pickle.loads(codecs.decode(pickled.encode(), "base64"))
except (
pickle.PicklingError,
tf.errors.InvalidArgumentError,
TypeError,
tf.errors.InternalError,
tf.errors.NotFoundError,
OverflowError,
TypingError,
AttributeError,
) as e:
warn("Did not pickle {}: {}".format(key, e))
return False
except ValueError as e:
warn(f"Failed at pickling {key}:{val} due to {e}")
return False
return True | Checks if a dictionary item can be pickled
Parameters
----------
key : str
    key for dictionary element
val : object
    element of dictionary
Returns
-------
picklable: bool
whether the dictionary item can be pickled | should_pickle | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
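A tiny illustrative sketch; lambdas genuinely fail to pickle, so the second call should emit the warning and return False.

from umap.parametric_umap import should_pickle

print(should_pickle("alpha", 1.0))             # True
print(should_pickle("callback", lambda x: x))  # False, with a warning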
def load_ParametricUMAP(save_location, verbose=True):
"""
Load a parametric UMAP model consisting of a umap-learn UMAP object
and corresponding keras models.
Parameters
----------
save_location : str
the folder that the model was saved in
verbose : bool, optional
Whether to print the loading steps, by default True
Returns
-------
parametric_umap.ParametricUMAP
Parametric UMAP objects
"""
## Loads a ParametricUMAP model and its related keras models
model_output = os.path.join(save_location, "model.pkl")
model = pickle.load((open(model_output, "rb")))
if verbose:
print("Pickle of ParametricUMAP model loaded from {}".format(model_output))
# load encoder
encoder_output = os.path.join(save_location, "encoder.keras")
if os.path.exists(encoder_output):
model.encoder = keras.models.load_model(encoder_output)
if verbose:
print("Keras encoder model loaded from {}".format(encoder_output))
    # load decoder
decoder_output = os.path.join(save_location, "decoder.keras")
if os.path.exists(decoder_output):
model.decoder = keras.models.load_model(decoder_output)
print("Keras decoder model loaded from {}".format(decoder_output))
    # load parametric_model
parametric_model_output = os.path.join(save_location, "parametric_model")
if os.path.exists(parametric_model_output):
model.parametric_model = keras.models.load_model(parametric_model_output)
print("Keras full model loaded from {}".format(parametric_model_output))
return model | Load a parametric UMAP model consisting of a umap-learn UMAP object
and corresponding keras models.
Parameters
----------
save_location : str
the folder that the model was saved in
verbose : bool, optional
Whether to print the loading steps, by default True
Returns
-------
parametric_umap.ParametricUMAP
Parametric UMAP objects | load_ParametricUMAP | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
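A hedged round-trip sketch, relying on the companion ParametricUMAP.save() method (not shown in this section) to write the model.pkl and keras files that this loader looks for.

import numpy as np
from umap.parametric_umap import ParametricUMAP, load_ParametricUMAP

X = np.random.random((500, 20)).astype(np.float32)
embedder = ParametricUMAP().fit(X)
embedder.save("/tmp/pumap_model")
restored = load_ParametricUMAP("/tmp/pumap_model")
Z = restored.transform(X)  # uses the reloaded encoder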
def covariance(x, y=None, keepdims=False):
"""Adapted from TF Probability."""
x = ops.convert_to_tensor(x)
# Covariance *only* uses the centered versions of x (and y).
x = x - ops.mean(x, axis=0, keepdims=True)
if y is None:
y = x
event_axis = ops.mean(x * ops.conj(y), axis=0, keepdims=keepdims)
else:
y = ops.convert_to_tensor(y, dtype=x.dtype)
y = y - ops.mean(y, axis=0, keepdims=True)
event_axis = [len(x.shape) - 1]
sample_axis = [0]
event_axis = ops.cast(event_axis, dtype="int32")
sample_axis = ops.cast(sample_axis, dtype="int32")
x_permed = ops.transpose(x)
y_permed = ops.transpose(y)
n_events = ops.shape(x_permed)[0]
n_samples = ops.shape(x_permed)[1]
    # Flatten sample_axis into one long dim.
    x_permed_flat = ops.reshape(x_permed, (n_events, n_samples))
    y_permed_flat = ops.reshape(y_permed, (n_events, n_samples))
# After matmul, cov.shape = batch_shape + [n_events, n_events]
cov = ops.matmul(x_permed_flat, ops.transpose(y_permed_flat)) / ops.cast(
n_samples, x.dtype
)
cov = ops.reshape(
cov,
(n_events**2, 1),
)
# Permuting by the argsort inverts the permutation, making
# cov.shape have ones in the position where there were samples, and
# [n_events * n_events] in the event position.
cov = ops.transpose(cov)
# Now expand event_shape**2 into event_shape + event_shape.
# We here use (for the first time) the fact that we require event_axis to be
# contiguous.
cov = ops.reshape(
cov,
ops.shape(cov)[:1] + (n_events, n_events),
)
if not keepdims:
cov = ops.squeeze(cov, axis=0)
return cov | Adapted from TF Probability. | covariance | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
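A hedged numeric check of what this helper computes for two inputs, namely the biased (divide-by-n) cross-covariance of the centered columns; running it requires a configured keras backend, and agreement is only up to float32 precision.

import numpy as np
from umap.parametric_umap import covariance

x = np.random.random((200, 3)).astype(np.float32)
y = np.random.random((200, 3)).astype(np.float32)
cov = np.asarray(covariance(x, y))
expected = (x - x.mean(0)).T @ (y - y.mean(0)) / len(x)
print(np.allclose(cov, expected, atol=1e-5))  # True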
    def __init__(self, indim, outdim):
        """
        Creates the same network as the one used by parametric UMAP.
        Note: shape of network is fixed.
        Parameters
        ----------
        indim : int
            dimension of input to network.
        outdim : int
            dimension of output of network.
        """
        super(PumapNet, self).__init__()
        self.dense1 = nn.Linear(indim, 100)
        self.dense2 = nn.Linear(100, 100)
        self.dense3 = nn.Linear(100, 100)
        self.dense4 = nn.Linear(100, outdim) | Creates the same network as the one used by parametric UMAP.
Note: shape of network is fixed.
Parameters
----------
indim : int
dimension of input to network.
outdim : int
dimension of output of network. | __init__ | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def weight_copier(km, pm):
"""Copies weights from a parametric UMAP encoder to pytorch.
Parameters
----------
km : encoder extracted from parametric UMAP.
pm: a PumapNet object. Will be overwritten.
Returns
-------
pm : PumapNet Object.
Net with copied weights.
"""
kweights = km.get_weights()
n_layers = int(len(kweights) / 2) # The actual number of layers
# Get the names of the pytorch layers
all_keys = [x for x in pm.state_dict().keys()]
    pm_names = [all_keys[2 * i].split(".")[0] for i in range(n_layers)]
# Set a variable for the state dict
pyt_state_dict = pm.state_dict()
for i in range(n_layers):
pyt_state_dict[pm_names[i] + ".bias"] = kweights[2 * i + 1]
pyt_state_dict[pm_names[i] + ".weight"] = np.transpose(kweights[2 * i])
for key in pyt_state_dict.keys():
pyt_state_dict[key] = torch.from_numpy(pyt_state_dict[key])
# Update
pm.load_state_dict(pyt_state_dict)
return pm | Copies weights from a parametric UMAP encoder to pytorch.
Parameters
----------
km : encoder extracted from parametric UMAP.
pm: a PumapNet object. Will be overwritten.
Returns
-------
pm : PumapNet Object.
Net with copied weights. | weight_copier | python | lmcinnes/umap | umap/parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/parametric_umap.py | BSD-3-Clause |
def __init__(self, **kwds):
warn(
"""The umap.parametric_umap package requires Tensorflow > 2.0 to be installed.
You can install Tensorflow at https://www.tensorflow.org/install
or you can install the CPU version of Tensorflow using
pip install umap-learn[parametric_umap]
"""
)
raise ImportError(
"umap.parametric_umap requires Tensorflow >= 2.0"
) from None | The umap.parametric_umap package requires Tensorflow > 2.0 to be installed.
You can install Tensorflow at https://www.tensorflow.org/install
or you can install the CPU version of Tensorflow using
pip install umap-learn[parametric_umap] | __init__ | python | lmcinnes/umap | umap/__init__.py | https://github.com/lmcinnes/umap/blob/master/umap/__init__.py | BSD-3-Clause |
def test_umap_transform_embedding_stability(iris, iris_subset_model, iris_selection):
"""Test that transforming data does not alter the learned embeddings
Issue #217 describes how using transform to embed new data using a
trained UMAP transformer causes the fitting embedding matrix to change
in cases when the new data has the same number of rows as the original
training data.
"""
data = iris.data[iris_selection]
fitter = iris_subset_model
original_embedding = fitter.embedding_.copy()
# The important point is that the new data has the same number of rows
# as the original fit data
new_data = np.random.random(data.shape)
_ = fitter.transform(new_data)
assert_array_equal(
original_embedding,
fitter.embedding_,
"Transforming new data changed the original embeddings",
)
# Example from issue #217
a = np.random.random((100, 10))
b = np.random.random((100, 5))
umap = UMAP(n_epochs=100)
u1 = umap.fit_transform(a[:, :5])
u1_orig = u1.copy()
assert_array_equal(u1_orig, umap.embedding_)
_ = umap.transform(b)
assert_array_equal(u1_orig, umap.embedding_) | Test that transforming data does not alter the learned embeddings
Issue #217 describes how using transform to embed new data using a
trained UMAP transformer causes the fitting embedding matrix to change
in cases when the new data has the same number of rows as the original
training data. | test_umap_transform_embedding_stability | python | lmcinnes/umap | umap/tests/test_umap_ops.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_umap_ops.py | BSD-3-Clause |
def test_check_input_data(all_finite_data, inverse_data):
"""
    Data input to UMAP gets checked for validity.
    This test checks whether data input is rejected or accepted
according to the "ensure_all_finite" keyword as used by
sklearn.
Parameters
----------
all_finite_data
inverse_data
-------
"""
inf_data = all_finite_data.copy()
inf_data[0] = np.inf
nan_data = all_finite_data.copy()
nan_data[0] = np.nan
inf_nan_data = all_finite_data.copy()
inf_nan_data[0] = np.nan
inf_nan_data[1] = np.inf
# wrapper to call each data handling function of UMAP in a convenient way
def call_umap_functions(data, ensure_all_finite):
u = UMAP(metric=nan_dist)
if ensure_all_finite is None:
u.fit_transform(data)
u.fit(data)
u.transform(data)
u.update(data)
u.inverse_transform(inverse_data)
else:
u.fit_transform(data, ensure_all_finite=ensure_all_finite)
u.fit(data, ensure_all_finite=ensure_all_finite)
u.transform(data, ensure_all_finite=ensure_all_finite)
u.update(data, ensure_all_finite=ensure_all_finite)
u.inverse_transform(inverse_data)
# Check whether correct data input is accepted
call_umap_functions(all_finite_data, None)
call_umap_functions(all_finite_data, True)
call_umap_functions(nan_data, "allow-nan")
call_umap_functions(all_finite_data, "allow-nan")
call_umap_functions(inf_data, False)
call_umap_functions(inf_nan_data, False)
call_umap_functions(nan_data, False)
call_umap_functions(all_finite_data, False)
    # Check whether each illegal input combination raises a ValueError
    illegal_cases = [
        (nan_data, None),
        (inf_data, None),
        (inf_nan_data, None),
        (nan_data, True),
        (inf_data, True),
        (inf_nan_data, True),
        (inf_data, "allow-nan"),
        (inf_nan_data, "allow-nan"),
    ]
    for bad_data, ensure_all_finite in illegal_cases:
        with pytest.raises(ValueError):
            call_umap_functions(bad_data, ensure_all_finite) | Data input to UMAP gets checked for validity.
    This test checks whether data input is dismissed or accepted
according to the "ensure_all_finite" keyword as used by
sklearn.
Parameters
----------
all_finite_data
inverse_data
------- | test_check_input_data | python | lmcinnes/umap | umap/tests/test_data_input.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_data_input.py | BSD-3-Clause |
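A condensed sketch of the behaviour the test above exercises (nan_dist stands in for the NaN-tolerant metric defined in the test module and is assumed here):
import numpy as np
from umap import UMAP
X = np.random.rand(100, 4)
X[0, 0] = np.nan
u = UMAP(metric=nan_dist)                        # NaN-aware metric, assumed available
u.fit(X, ensure_all_finite="allow-nan")          # accepted: NaNs explicitly allowed
try:
    u.fit(X)                                     # default validation rejects NaNs
except ValueError:
    pass                                         # sklearn-style input check raises here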
def test_create_model(moon_dataset):
"""test a simple parametric UMAP network"""
embedder = ParametricUMAP()
embedding = embedder.fit_transform(moon_dataset)
# completes successfully
assert embedding is not None
assert embedding.shape == (moon_dataset.shape[0], 2) | test a simple parametric UMAP network | test_create_model | python | lmcinnes/umap | umap/tests/test_parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_parametric_umap.py | BSD-3-Clause |
def test_global_loss(moon_dataset):
"""test a simple parametric UMAP network"""
embedder = ParametricUMAP(global_correlation_loss_weight=1.0)
embedding = embedder.fit_transform(moon_dataset)
# completes successfully
assert embedding is not None
assert embedding.shape == (moon_dataset.shape[0], 2) | test a simple parametric UMAP network | test_global_loss | python | lmcinnes/umap | umap/tests/test_parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_parametric_umap.py | BSD-3-Clause |
def test_inverse_transform(moon_dataset):
"""tests inverse_transform"""
def norm(x):
return (x - np.min(x)) / (np.max(x) - np.min(x))
X = norm(moon_dataset)
embedder = ParametricUMAP(parametric_reconstruction=True)
Z = embedder.fit_transform(X)
X_r = embedder.inverse_transform(Z)
# completes successfully
assert X_r is not None
assert X_r.shape == X.shape | tests inverse_transform | test_inverse_transform | python | lmcinnes/umap | umap/tests/test_parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_parametric_umap.py | BSD-3-Clause |
def test_custom_encoder_decoder(moon_dataset):
"""test using a custom encoder / decoder"""
dims = (2,)
n_components = 2
encoder = tf.keras.Sequential(
[
tf.keras.layers.Input(shape=dims),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=n_components, name="z"),
]
)
decoder = tf.keras.Sequential(
[
tf.keras.layers.Input(shape=(n_components,)),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(units=100, activation="relu"),
tf.keras.layers.Dense(
units=np.prod(dims), name="recon", activation=None
),
tf.keras.layers.Reshape(dims),
]
)
embedder = ParametricUMAP(
encoder=encoder,
decoder=decoder,
dims=dims,
parametric_reconstruction=True,
verbose=True,
)
embedding = embedder.fit_transform(moon_dataset)
# completes successfully
assert embedding is not None
assert embedding.shape == (moon_dataset.shape[0], 2) | test using a custom encoder / decoder | test_custom_encoder_decoder | python | lmcinnes/umap | umap/tests/test_parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_parametric_umap.py | BSD-3-Clause |
def test_validation(moon_dataset):
"""tests adding a validation dataset"""
X_train, X_valid = train_test_split(moon_dataset, train_size=0.5)
embedder = ParametricUMAP(
parametric_reconstruction=True, reconstruction_validation=X_valid, verbose=True
)
embedding = embedder.fit_transform(X_train)
# completes successfully
assert embedding is not None
assert embedding.shape == (X_train.shape[0], 2) | tests adding a validation dataset | test_validation | python | lmcinnes/umap | umap/tests/test_parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_parametric_umap.py | BSD-3-Clause |
def test_save_load(moon_dataset):
"""tests saving and loading"""
embedder = ParametricUMAP()
embedding = embedder.fit_transform(moon_dataset)
# completes successfully
assert embedding is not None
assert embedding.shape == (moon_dataset.shape[0], 2)
# Portable tempfile
model_path = tempfile.mkdtemp(suffix="_umap_model")
embedder.save(model_path)
loaded_model = load_ParametricUMAP(model_path)
assert loaded_model is not None
loaded_embedding = loaded_model.transform(moon_dataset)
assert_array_almost_equal(
embedding,
loaded_embedding,
decimal=5,
err_msg="Loaded model transform fails to match original embedding",
) | tests saving and loading | test_save_load | python | lmcinnes/umap | umap/tests/test_parametric_umap.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_parametric_umap.py | BSD-3-Clause |
def run_test_metric(metric, test_data, dist_matrix, with_grad=False):
"""Core utility function to test target metric on test data"""
if with_grad:
dist_function = dist.named_distances_with_gradients[metric]
else:
dist_function = dist.named_distances[metric]
sample_size = test_data.shape[0]
test_matrix = [
[dist_function(test_data[i], test_data[j]) for j in range(sample_size)]
for i in range(sample_size)
]
if with_grad:
test_matrix = [d for pairs in test_matrix for d, grad in pairs]
test_matrix = np.array(test_matrix).reshape(sample_size, sample_size)
assert_array_almost_equal(
test_matrix,
dist_matrix,
        err_msg="Distances don't match for metric {}".format(metric),
) | Core utility function to test target metric on test data | run_test_metric | python | lmcinnes/umap | umap/tests/test_umap_metrics.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_umap_metrics.py | BSD-3-Clause |
def run_test_sparse_metric(metric, sparse_test_data, dist_matrix):
"""Core utility function to run test of target metric on sparse data"""
dist_function = spdist.sparse_named_distances[metric]
if metric in spdist.sparse_need_n_features:
test_matrix = np.array(
[
[
dist_function(
sparse_test_data[i].indices,
sparse_test_data[i].data,
sparse_test_data[j].indices,
sparse_test_data[j].data,
sparse_test_data.shape[1],
)
for j in range(sparse_test_data.shape[0])
]
for i in range(sparse_test_data.shape[0])
]
)
else:
test_matrix = np.array(
[
[
dist_function(
sparse_test_data[i].indices,
sparse_test_data[i].data,
sparse_test_data[j].indices,
sparse_test_data[j].data,
)
for j in range(sparse_test_data.shape[0])
]
for i in range(sparse_test_data.shape[0])
]
)
assert_array_almost_equal(
test_matrix,
dist_matrix,
        err_msg="Sparse distances don't match for metric {}".format(metric),
) | Core utility function to run test of target metric on sparse data | run_test_sparse_metric | python | lmcinnes/umap | umap/tests/test_umap_metrics.py | https://github.com/lmcinnes/umap/blob/master/umap/tests/test_umap_metrics.py | BSD-3-Clause |
def torus_euclidean_grad(x, y, torus_dimensions=(2 * np.pi, 2 * np.pi)):
    r"""Euclidean distance on a torus with periods torus_dimensions, returned
    together with its gradient.
    .. math::
        D(x, y) = \sqrt{\sum_i \min(|x_i - y_i|, L_i - |x_i - y_i|)^2}
"""
distance_sqr = 0.0
g = np.zeros_like(x)
for i in range(x.shape[0]):
a = abs(x[i] - y[i])
if 2 * a < torus_dimensions[i]:
distance_sqr += a**2
g[i] = x[i] - y[i]
else:
distance_sqr += (torus_dimensions[i] - a) ** 2
g[i] = (x[i] - y[i]) * (a - torus_dimensions[i]) / a
distance = np.sqrt(distance_sqr)
    return distance, g / (1e-6 + distance) | Euclidean distance on a torus with periods torus_dimensions, returned
    together with its gradient.
    .. math::
        D(x, y) = \sqrt{\sum_i \min(|x_i - y_i|, L_i - |x_i - y_i|)^2} | torus_euclidean_grad | python | lmcinnes/umap | examples/mnist_torus_sphere_example.py | https://github.com/lmcinnes/umap/blob/master/examples/mnist_torus_sphere_example.py | BSD-3-Clause
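Because the function returns a (distance, gradient) pair, it has the signature UMAP expects from a callable output metric, so the embedding can be optimised directly on a torus; a hedged sketch (data is a placeholder array):
import numpy as np
from umap import UMAP
embedder = UMAP(output_metric=torus_euclidean_grad, random_state=42)
embedding = embedder.fit_transform(data)         # data: any (n_samples, n_features) array
torus_coords = embedding % (2 * np.pi)           # wrap coordinates back into [0, 2*pi)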
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
return windows | Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C) | window_partition | python | JingyunLiang/SwinIR | models/network_swinir.py | https://github.com/JingyunLiang/SwinIR/blob/master/models/network_swinir.py | Apache-2.0 |
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / window_size / window_size))
x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x | Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C) | window_reverse | python | JingyunLiang/SwinIR | models/network_swinir.py | https://github.com/JingyunLiang/SwinIR/blob/master/models/network_swinir.py | Apache-2.0 |
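window_reverse inverts window_partition whenever H and W are multiples of the window size, so the two compose to the identity; a quick sanity check:
import torch
x = torch.randn(2, 8, 8, 32)                     # (B, H, W, C)
windows = window_partition(x, 4)                 # (8, 4, 4, 32): four 4x4 windows per image
x_back = window_reverse(windows, 4, 8, 8)
assert torch.equal(x, x_back)                    # only reshapes and permutes, so the round trip is exact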
def forward(self, x, mask=None):
"""
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if mask is not None:
nW = mask.shape[0]
attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
else:
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x | Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None | forward | python | JingyunLiang/SwinIR | models/network_swinir.py | https://github.com/JingyunLiang/SwinIR/blob/master/models/network_swinir.py | Apache-2.0 |
def forward(self, x):
"""
x: B, H*W, C
"""
H, W = self.input_resolution
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."
x = x.view(B, H, W, C)
x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
x = self.norm(x)
x = self.reduction(x)
return x | x: B, H*W, C | forward | python | JingyunLiang/SwinIR | models/network_swinir.py | https://github.com/JingyunLiang/SwinIR/blob/master/models/network_swinir.py | Apache-2.0 |
def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
"""Calculate PSNR (Peak Signal-to-Noise Ratio).
Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
Args:
img1 (ndarray): Images with range [0, 255].
img2 (ndarray): Images with range [0, 255].
crop_border (int): Cropped pixels in each edge of an image. These
pixels are not involved in the PSNR calculation.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
Default: 'HWC'.
test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
Returns:
float: psnr result.
"""
    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
if input_order not in ['HWC', 'CHW']:
raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
img1 = reorder_image(img1, input_order=input_order)
img2 = reorder_image(img2, input_order=input_order)
img1 = img1.astype(np.float64)
img2 = img2.astype(np.float64)
if crop_border != 0:
img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
if test_y_channel:
img1 = to_y_channel(img1)
img2 = to_y_channel(img2)
mse = np.mean((img1 - img2) ** 2)
if mse == 0:
return float('inf')
return 20. * np.log10(255. / np.sqrt(mse)) | Calculate PSNR (Peak Signal-to-Noise Ratio).
Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
Args:
img1 (ndarray): Images with range [0, 255].
img2 (ndarray): Images with range [0, 255].
crop_border (int): Cropped pixels in each edge of an image. These
pixels are not involved in the PSNR calculation.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
Default: 'HWC'.
test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
Returns:
float: psnr result. | calculate_psnr | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
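A usage sketch on uint8 images in HWC order (the file names are placeholders; cropping 4 border pixels and scoring on the Y channel follows common super-resolution benchmarking practice):
import cv2
img_gt = cv2.imread('gt.png')                    # BGR, uint8, range [0, 255]
img_sr = cv2.imread('sr.png')
psnr = calculate_psnr(img_gt, img_sr, crop_border=4, input_order='HWC', test_y_channel=True)
print(f'PSNR: {psnr:.2f} dB')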
def _ssim(img1, img2):
"""Calculate SSIM (structural similarity) for one channel images.
It is called by func:`calculate_ssim`.
Args:
img1 (ndarray): Images with range [0, 255] with order 'HWC'.
img2 (ndarray): Images with range [0, 255] with order 'HWC'.
Returns:
float: ssim result.
"""
C1 = (0.01 * 255) ** 2
C2 = (0.03 * 255) ** 2
img1 = img1.astype(np.float64)
img2 = img2.astype(np.float64)
kernel = cv2.getGaussianKernel(11, 1.5)
window = np.outer(kernel, kernel.transpose())
mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
mu1_sq = mu1 ** 2
mu2_sq = mu2 ** 2
mu1_mu2 = mu1 * mu2
sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
return ssim_map.mean() | Calculate SSIM (structural similarity) for one channel images.
It is called by func:`calculate_ssim`.
Args:
img1 (ndarray): Images with range [0, 255] with order 'HWC'.
img2 (ndarray): Images with range [0, 255] with order 'HWC'.
Returns:
float: ssim result. | _ssim | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
"""Calculate SSIM (structural similarity).
Ref:
Image quality assessment: From error visibility to structural similarity
The results are the same as that of the official released MATLAB code in
https://ece.uwaterloo.ca/~z70wang/research/ssim/.
For three-channel images, SSIM is calculated for each channel and then
averaged.
Args:
img1 (ndarray): Images with range [0, 255].
img2 (ndarray): Images with range [0, 255].
crop_border (int): Cropped pixels in each edge of an image. These
pixels are not involved in the SSIM calculation.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
Default: 'HWC'.
test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
Returns:
float: ssim result.
"""
    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
if input_order not in ['HWC', 'CHW']:
raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
img1 = reorder_image(img1, input_order=input_order)
img2 = reorder_image(img2, input_order=input_order)
img1 = img1.astype(np.float64)
img2 = img2.astype(np.float64)
if crop_border != 0:
img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
if test_y_channel:
img1 = to_y_channel(img1)
img2 = to_y_channel(img2)
ssims = []
for i in range(img1.shape[2]):
ssims.append(_ssim(img1[..., i], img2[..., i]))
return np.array(ssims).mean() | Calculate SSIM (structural similarity).
Ref:
Image quality assessment: From error visibility to structural similarity
The results are the same as that of the official released MATLAB code in
https://ece.uwaterloo.ca/~z70wang/research/ssim/.
For three-channel images, SSIM is calculated for each channel and then
averaged.
Args:
img1 (ndarray): Images with range [0, 255].
img2 (ndarray): Images with range [0, 255].
crop_border (int): Cropped pixels in each edge of an image. These
pixels are not involved in the SSIM calculation.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
Default: 'HWC'.
test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
Returns:
float: ssim result. | calculate_ssim | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
def calculate_psnrb(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
"""Calculate PSNR-B (Peak Signal-to-Noise Ratio).
Ref: Quality assessment of deblocked images, for JPEG image deblocking evaluation
# https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
Args:
img1 (ndarray): Images with range [0, 255].
img2 (ndarray): Images with range [0, 255].
crop_border (int): Cropped pixels in each edge of an image. These
pixels are not involved in the PSNR calculation.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
Default: 'HWC'.
test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
Returns:
float: psnr result.
"""
    assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
if input_order not in ['HWC', 'CHW']:
raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
img1 = reorder_image(img1, input_order=input_order)
img2 = reorder_image(img2, input_order=input_order)
img1 = img1.astype(np.float64)
img2 = img2.astype(np.float64)
if crop_border != 0:
img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
if test_y_channel:
img1 = to_y_channel(img1)
img2 = to_y_channel(img2)
# follow https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
img1 = torch.from_numpy(img1).permute(2, 0, 1).unsqueeze(0) / 255.
img2 = torch.from_numpy(img2).permute(2, 0, 1).unsqueeze(0) / 255.
total = 0
for c in range(img1.shape[1]):
mse = torch.nn.functional.mse_loss(img1[:, c:c + 1, :, :], img2[:, c:c + 1, :, :], reduction='none')
bef = _blocking_effect_factor(img1[:, c:c + 1, :, :])
mse = mse.view(mse.shape[0], -1).mean(1)
total += 10 * torch.log10(1 / (mse + bef))
return float(total) / img1.shape[1] | Calculate PSNR-B (Peak Signal-to-Noise Ratio).
Ref: Quality assessment of deblocked images, for JPEG image deblocking evaluation
# https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
Args:
img1 (ndarray): Images with range [0, 255].
img2 (ndarray): Images with range [0, 255].
crop_border (int): Cropped pixels in each edge of an image. These
pixels are not involved in the PSNR calculation.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
Default: 'HWC'.
test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
Returns:
float: psnr result. | calculate_psnrb | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
def reorder_image(img, input_order='HWC'):
"""Reorder images to 'HWC' order.
If the input_order is (h, w), return (h, w, 1);
If the input_order is (c, h, w), return (h, w, c);
If the input_order is (h, w, c), return as it is.
Args:
img (ndarray): Input image.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
If the input image shape is (h, w), input_order will not have
effects. Default: 'HWC'.
Returns:
ndarray: reordered image.
"""
if input_order not in ['HWC', 'CHW']:
raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' "'HWC' and 'CHW'")
if len(img.shape) == 2:
img = img[..., None]
if input_order == 'CHW':
img = img.transpose(1, 2, 0)
return img | Reorder images to 'HWC' order.
If the input_order is (h, w), return (h, w, 1);
If the input_order is (c, h, w), return (h, w, c);
If the input_order is (h, w, c), return as it is.
Args:
img (ndarray): Input image.
input_order (str): Whether the input order is 'HWC' or 'CHW'.
If the input image shape is (h, w), input_order will not have
effects. Default: 'HWC'.
Returns:
ndarray: reordered image. | reorder_image | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
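A small illustration of the shape handling:
import numpy as np
reorder_image(np.zeros((3, 64, 64)), input_order='CHW').shape   # (64, 64, 3)
reorder_image(np.zeros((64, 64))).shape                         # (64, 64, 1): grayscale gains a channel axis
reorder_image(np.zeros((64, 64, 3))).shape                      # (64, 64, 3): HWC passes through unchanged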
def to_y_channel(img):
"""Change to Y channel of YCbCr.
Args:
img (ndarray): Images with range [0, 255].
Returns:
(ndarray): Images with range [0, 255] (float type) without round.
"""
img = img.astype(np.float32) / 255.
if img.ndim == 3 and img.shape[2] == 3:
img = bgr2ycbcr(img, y_only=True)
img = img[..., None]
return img * 255. | Change to Y channel of YCbCr.
Args:
img (ndarray): Images with range [0, 255].
Returns:
(ndarray): Images with range [0, 255] (float type) without round. | to_y_channel | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
def _convert_input_type_range(img):
"""Convert the type and range of the input image.
It converts the input image to np.float32 type and range of [0, 1].
It is mainly used for pre-processing the input image in colorspace
    conversion functions such as rgb2ycbcr and ycbcr2rgb.
Args:
img (ndarray): The input image. It accepts:
1. np.uint8 type with range [0, 255];
2. np.float32 type with range [0, 1].
Returns:
(ndarray): The converted image with type of np.float32 and range of
[0, 1].
"""
img_type = img.dtype
img = img.astype(np.float32)
if img_type == np.float32:
pass
elif img_type == np.uint8:
img /= 255.
else:
raise TypeError('The img type should be np.float32 or np.uint8, ' f'but got {img_type}')
return img | Convert the type and range of the input image.
It converts the input image to np.float32 type and range of [0, 1].
It is mainly used for pre-processing the input image in colorspace
    conversion functions such as rgb2ycbcr and ycbcr2rgb.
Args:
img (ndarray): The input image. It accepts:
1. np.uint8 type with range [0, 255];
2. np.float32 type with range [0, 1].
Returns:
(ndarray): The converted image with type of np.float32 and range of
[0, 1]. | _convert_input_type_range | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
def _convert_output_type_range(img, dst_type):
"""Convert the type and range of the image according to dst_type.
It converts the image to desired type and range. If `dst_type` is np.uint8,
images will be converted to np.uint8 type with range [0, 255]. If
`dst_type` is np.float32, it converts the image to np.float32 type with
range [0, 1].
    It is mainly used for post-processing images in colorspace conversion
functions such as rgb2ycbcr and ycbcr2rgb.
Args:
img (ndarray): The image to be converted with np.float32 type and
range [0, 255].
dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it
converts the image to np.uint8 type with range [0, 255]. If
dst_type is np.float32, it converts the image to np.float32 type
with range [0, 1].
Returns:
(ndarray): The converted image with desired type and range.
"""
if dst_type not in (np.uint8, np.float32):
raise TypeError('The dst_type should be np.float32 or np.uint8, ' f'but got {dst_type}')
if dst_type == np.uint8:
img = img.round()
else:
img /= 255.
return img.astype(dst_type) | Convert the type and range of the image according to dst_type.
It converts the image to desired type and range. If `dst_type` is np.uint8,
images will be converted to np.uint8 type with range [0, 255]. If
`dst_type` is np.float32, it converts the image to np.float32 type with
range [0, 1].
    It is mainly used for post-processing images in colorspace conversion
functions such as rgb2ycbcr and ycbcr2rgb.
Args:
img (ndarray): The image to be converted with np.float32 type and
range [0, 255].
dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it
converts the image to np.uint8 type with range [0, 255]. If
dst_type is np.float32, it converts the image to np.float32 type
with range [0, 1].
Returns:
(ndarray): The converted image with desired type and range. | _convert_output_type_range | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
def bgr2ycbcr(img, y_only=False):
"""Convert a BGR image to YCbCr image.
The bgr version of rgb2ycbcr.
It implements the ITU-R BT.601 conversion for standard-definition
television. See more details in
https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`.
In OpenCV, it implements a JPEG conversion. See more details in
https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
Args:
img (ndarray): The input image. It accepts:
1. np.uint8 type with range [0, 255];
2. np.float32 type with range [0, 1].
y_only (bool): Whether to only return Y channel. Default: False.
Returns:
ndarray: The converted YCbCr image. The output image has the same type
and range as input image.
"""
img_type = img.dtype
img = _convert_input_type_range(img)
if y_only:
out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0
else:
out_img = np.matmul(
img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128]
out_img = _convert_output_type_range(out_img, img_type)
return out_img | Convert a BGR image to YCbCr image.
The bgr version of rgb2ycbcr.
It implements the ITU-R BT.601 conversion for standard-definition
television. See more details in
https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`.
In OpenCV, it implements a JPEG conversion. See more details in
https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
Args:
img (ndarray): The input image. It accepts:
1. np.uint8 type with range [0, 255];
2. np.float32 type with range [0, 1].
y_only (bool): Whether to only return Y channel. Default: False.
Returns:
ndarray: The converted YCbCr image. The output image has the same type
and range as input image. | bgr2ycbcr | python | JingyunLiang/SwinIR | utils/util_calculate_psnr_ssim.py | https://github.com/JingyunLiang/SwinIR/blob/master/utils/util_calculate_psnr_ssim.py | Apache-2.0 |
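A sketch showing that the output type and range follow the input (uint8 in [0, 255] versus float32 in [0, 1]):
import numpy as np
bgr_u8 = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
y = bgr2ycbcr(bgr_u8, y_only=True)               # uint8 luma, roughly in [16, 235]
bgr_f32 = bgr_u8.astype(np.float32) / 255.
ycbcr = bgr2ycbcr(bgr_f32)                       # float32 YCbCr image in [0, 1]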
def memoize(_func=None, *, key_func=None):
"""@memoize(key_func=None). Makes decorated function memoize its results.
If key_func is specified uses key_func(*func_args, **func_kwargs) as memory key.
Otherwise uses args + tuple(sorted(kwargs.items()))
Exposes its memory via .memory attribute.
"""
if _func is not None:
return memoize()(_func)
return _memory_decorator({}, key_func) | @memoize(key_func=None). Makes decorated function memoize its results.
If key_func is specified uses key_func(*func_args, **func_kwargs) as memory key.
Otherwise uses args + tuple(sorted(kwargs.items()))
Exposes its memory via .memory attribute. | memoize | python | Suor/funcy | funcy/calc.py | https://github.com/Suor/funcy/blob/master/funcy/calc.py | BSD-3-Clause |
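Usage sketch covering the bare decorator, the exposed memory, and the key_func form (expensive_lookup is a hypothetical backend call):
from funcy import memoize
@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
fib(30)          # each argument is computed once
fib.memory       # the underlying cache dict is exposed
@memoize(key_func=str.lower)
def fetch_user(name):
    return expensive_lookup(name)   # hypothetical; results are shared case-insensitively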
def cache(timeout, *, key_func=None):
"""Caches a function results for timeout seconds."""
if isinstance(timeout, timedelta):
timeout = timeout.total_seconds()
return _memory_decorator(CacheMemory(timeout), key_func) | Caches a function results for timeout seconds. | cache | python | Suor/funcy | funcy/calc.py | https://github.com/Suor/funcy/blob/master/funcy/calc.py | BSD-3-Clause |
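Usage sketch (download_rates is a hypothetical slow call; both a timedelta and plain seconds are accepted as the timeout):
from datetime import timedelta
from funcy import cache
@cache(timedelta(minutes=10))       # equivalently @cache(600)
def load_rates():
    return download_rates()         # hypothetical; the result is reused for 10 minutes, then recomputed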
def make_lookuper(func):
"""
Creates a single argument function looking up result in a memory.
Decorated function is called once on first lookup and should return all available
arg-value pairs.
Resulting function will raise LookupError when using @make_lookuper
or simply return None when using @silent_lookuper.
"""
has_args, has_keys = has_arg_types(func)
assert not has_keys, \
'Lookup table building function should not have keyword arguments'
if has_args:
@memoize
def wrapper(*args):
f = lambda: func(*args)
f.__name__ = '%s(%s)' % (func.__name__, ', '.join(map(str, args)))
return make_lookuper(f)
else:
memory = {}
def wrapper(arg):
if not memory:
                memory[object()] = None  # prevent continuous memory refilling
memory.update(func())
if silent:
return memory.get(arg)
elif arg in memory:
return memory[arg]
else:
raise LookupError("Failed to look up %s(%s)" % (func.__name__, arg))
return wraps(func)(wrapper) | Creates a single argument function looking up result in a memory.
Decorated function is called once on first lookup and should return all available
arg-value pairs.
Resulting function will raise LookupError when using @make_lookuper
or simply return None when using @silent_lookuper. | _make_lookuper.make_lookuper | python | Suor/funcy | funcy/calc.py | https://github.com/Suor/funcy/blob/master/funcy/calc.py | BSD-3-Clause |
def _make_lookuper(silent):
def make_lookuper(func):
"""
Creates a single argument function looking up result in a memory.
Decorated function is called once on first lookup and should return all available
arg-value pairs.
Resulting function will raise LookupError when using @make_lookuper
or simply return None when using @silent_lookuper.
"""
has_args, has_keys = has_arg_types(func)
assert not has_keys, \
'Lookup table building function should not have keyword arguments'
if has_args:
@memoize
def wrapper(*args):
f = lambda: func(*args)
f.__name__ = '%s(%s)' % (func.__name__, ', '.join(map(str, args)))
return make_lookuper(f)
else:
memory = {}
def wrapper(arg):
if not memory:
                    memory[object()] = None  # prevent continuous memory refilling
memory.update(func())
if silent:
return memory.get(arg)
elif arg in memory:
return memory[arg]
else:
raise LookupError("Failed to look up %s(%s)" % (func.__name__, arg))
return wraps(func)(wrapper)
return make_lookuper | Creates a single argument function looking up result in a memory.
Decorated function is called once on first lookup and should return all available
arg-value pairs.
Resulting function will raise LookupError when using @make_lookuper
or simply return None when using @silent_lookuper. | _make_lookuper | python | Suor/funcy | funcy/calc.py | https://github.com/Suor/funcy/blob/master/funcy/calc.py | BSD-3-Clause |
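Usage sketch: the table-building function runs lazily on the first lookup, after which the decorated name acts as a single-argument lookup; the silent variant returns None instead of raising:
from funcy import make_lookuper, silent_lookuper
@make_lookuper
def country_code():
    return {'US': 1, 'GB': 44, 'DE': 49}   # built once, on first call
country_code('GB')        # -> 44
country_code('XX')        # raises LookupError
@silent_lookuper
def country_code_safe():
    return {'US': 1, 'GB': 44, 'DE': 49}
country_code_safe('XX')   # -> None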
def re_iter(regex, s, flags=0):
"""Iterates over matches of regex in s, presents them in simplest possible form"""
regex, getter = _prepare(regex, flags)
return map(getter, regex.finditer(s)) | Iterates over matches of regex in s, presents them in simplest possible form | re_iter | python | Suor/funcy | funcy/strings.py | https://github.com/Suor/funcy/blob/master/funcy/strings.py | BSD-3-Clause |
def re_all(regex, s, flags=0):
"""Lists all matches of regex in s, presents them in simplest possible form"""
return list(re_iter(regex, s, flags)) | Lists all matches of regex in s, presents them in simplest possible form | re_all | python | Suor/funcy | funcy/strings.py | https://github.com/Suor/funcy/blob/master/funcy/strings.py | BSD-3-Clause |
def re_find(regex, s, flags=0):
"""Matches regex against the given string,
returns the match in the simplest possible form."""
return re_finder(regex, flags)(s) | Matches regex against the given string,
returns the match in the simplest possible form. | re_find | python | Suor/funcy | funcy/strings.py | https://github.com/Suor/funcy/blob/master/funcy/strings.py | BSD-3-Clause |
def re_test(regex, s, flags=0):
"""Tests whether regex matches against s."""
return re_tester(regex, flags)(s) | Tests whether regex matches against s. | re_test | python | Suor/funcy | funcy/strings.py | https://github.com/Suor/funcy/blob/master/funcy/strings.py | BSD-3-Clause |
def re_finder(regex, flags=0):
"""Creates a function finding regex in passed string."""
regex, _getter = _prepare(regex, flags)
getter = lambda m: _getter(m) if m else None
return lambda s: getter(regex.search(s)) | Creates a function finding regex in passed string. | re_finder | python | Suor/funcy | funcy/strings.py | https://github.com/Suor/funcy/blob/master/funcy/strings.py | BSD-3-Clause |
def re_tester(regex, flags=0):
"""Creates a predicate testing passed string with regex."""
if not isinstance(regex, _re_type):
regex = re.compile(regex, flags)
return lambda s: bool(regex.search(s)) | Creates a predicate testing passed string with regex. | re_tester | python | Suor/funcy | funcy/strings.py | https://github.com/Suor/funcy/blob/master/funcy/strings.py | BSD-3-Clause |
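A short sketch of the "simplest possible form" convention shared by the re_* helpers above: no capture groups yield the whole match, one group yields that group, several groups yield a tuple, and named groups yield a dict:
from funcy import re_find, re_all, re_test, re_finder
re_find(r'\d+', 'x12y34')                        # -> '12'
re_find(r'(\d+)-(\d+)', 'span 12-34')            # -> ('12', '34')
re_find(r'(?P<a>\d+)-(?P<b>\d+)', '12-34')       # -> {'a': '12', 'b': '34'}
re_all(r'\d+', 'x12y34')                         # -> ['12', '34']
re_test(r'^\d+$', '123')                         # -> True
find_digits = re_finder(r'\d+')                  # reusable single-argument finder
find_digits('abc42')                             # -> '42'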