sklearn.neural\_network.BernoulliRBM
====================================
*class*sklearn.neural\_network.BernoulliRBM(*n\_components=256*, *\**, *learning\_rate=0.1*, *batch\_size=10*, *n\_iter=10*, *verbose=0*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neural_network/_rbm.py#L26)
Bernoulli Restricted Boltzmann Machine (RBM).
A Restricted Boltzmann Machine with binary visible units and binary hidden units. Parameters are estimated using Stochastic Maximum Likelihood (SML), also known as Persistent Contrastive Divergence (PCD) [2].
The time complexity of this implementation is `O(d ** 2)` assuming d ~ n\_features ~ n\_components.
Read more in the [User Guide](../neural_networks_unsupervised#rbm).
Parameters:
**n\_components**int, default=256
Number of binary hidden units.
**learning\_rate**float, default=0.1
The learning rate for weight updates. It is *highly* recommended to tune this hyper-parameter. Reasonable values are in the 10\*\*[0., -3.] range.
**batch\_size**int, default=10
Number of examples per minibatch.
**n\_iter**int, default=10
Number of iterations/sweeps over the training dataset to perform during training.
**verbose**int, default=0
The verbosity level. The default, zero, means silent mode. Range of values is [0, inf].
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for:
* Gibbs sampling from visible and hidden layers.
* Initializing components, sampling from layers during fit.
* Corrupting the data when scoring samples.
Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**intercept\_hidden\_**array-like of shape (n\_components,)
Biases of the hidden units.
**intercept\_visible\_**array-like of shape (n\_features,)
Biases of the visible units.
**components\_**array-like of shape (n\_components, n\_features)
Weight matrix, where `n_features` is the number of visible units and `n_components` is the number of hidden units.
**h\_samples\_**array-like of shape (batch\_size, n\_components)
Hidden activation sampled from the model distribution, where `batch_size` is the number of examples per minibatch and `n_components` is the number of hidden units.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`sklearn.neural_network.MLPRegressor`](sklearn.neural_network.mlpregressor#sklearn.neural_network.MLPRegressor "sklearn.neural_network.MLPRegressor")
Multi-layer Perceptron regressor.
[`sklearn.neural_network.MLPClassifier`](sklearn.neural_network.mlpclassifier#sklearn.neural_network.MLPClassifier "sklearn.neural_network.MLPClassifier")
Multi-layer Perceptron classifier.
[`sklearn.decomposition.PCA`](sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")
An unsupervised linear dimensionality reduction model.
#### References
[1] Hinton, G. E., Osindero, S. and Teh, Y. A fast learning algorithm for
deep belief nets. Neural Computation 18, pp 1527-1554. <https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf>
[2] Tieleman, T. Training Restricted Boltzmann Machines using
Approximations to the Likelihood Gradient. International Conference on Machine Learning (ICML) 2008
#### Examples
```
>>> import numpy as np
>>> from sklearn.neural_network import BernoulliRBM
>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> model = BernoulliRBM(n_components=2)
>>> model.fit(X)
BernoulliRBM(n_components=2)
```
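As a hedged continuation of the example above, the fitted model can map the same data to hidden-unit activation probabilities with `transform`; the values depend on the random weight initialization, so only the output shape is shown:
```
>>> model.transform(X).shape
(4, 2)
```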
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.neural_network.BernoulliRBM.fit "sklearn.neural_network.BernoulliRBM.fit")(X[, y]) | Fit the model to the data X. |
| [`fit_transform`](#sklearn.neural_network.BernoulliRBM.fit_transform "sklearn.neural_network.BernoulliRBM.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.neural_network.BernoulliRBM.get_feature_names_out "sklearn.neural_network.BernoulliRBM.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.neural_network.BernoulliRBM.get_params "sklearn.neural_network.BernoulliRBM.get_params")([deep]) | Get parameters for this estimator. |
| [`gibbs`](#sklearn.neural_network.BernoulliRBM.gibbs "sklearn.neural_network.BernoulliRBM.gibbs")(v) | Perform one Gibbs sampling step. |
| [`partial_fit`](#sklearn.neural_network.BernoulliRBM.partial_fit "sklearn.neural_network.BernoulliRBM.partial_fit")(X[, y]) | Fit the model to the partial segment of the data X. |
| [`score_samples`](#sklearn.neural_network.BernoulliRBM.score_samples "sklearn.neural_network.BernoulliRBM.score_samples")(X) | Compute the pseudo-likelihood of X. |
| [`set_params`](#sklearn.neural_network.BernoulliRBM.set_params "sklearn.neural_network.BernoulliRBM.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.neural_network.BernoulliRBM.transform "sklearn.neural_network.BernoulliRBM.transform")(X) | Compute the hidden layer activation probabilities, P(h=1|v=X). |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neural_network/_rbm.py#L369)
Fit the model to the data X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
Returns:
**self**BernoulliRBM
The fitted model.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.neural_network.BernoulliRBM.fit "sklearn.neural_network.BernoulliRBM.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
gibbs(*v*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neural_network/_rbm.py#L240)
Perform one Gibbs sampling step.
Parameters:
**v**ndarray of shape (n\_samples, n\_features)
Values of the visible layer to start from.
Returns:
**v\_new**ndarray of shape (n\_samples, n\_features)
Values of the visible layer after one Gibbs step.
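A minimal hedged sketch of one Gibbs step on a fitted model (the data and hyper-parameters below are illustrative; the sampled values are random, so only the shape is checked):
```
>>> import numpy as np
>>> from sklearn.neural_network import BernoulliRBM
>>> rng = np.random.RandomState(0)
>>> X = rng.randint(2, size=(6, 4))  # binary visible data
>>> rbm = BernoulliRBM(n_components=2, random_state=0).fit(X)
>>> v_new = rbm.gibbs(X)  # one step: visible -> hidden -> visible
>>> v_new.shape
(6, 4)
```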
partial\_fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neural_network/_rbm.py#L261)
Fit the model to the partial segment of the data X.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
Returns:
**self**BernoulliRBM
The fitted model.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neural_network/_rbm.py#L332)
Compute the pseudo-likelihood of X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Values of the visible layer. Must be all-boolean (not checked).
Returns:
**pseudo\_likelihood**ndarray of shape (n\_samples,)
Value of the pseudo-likelihood (proxy for likelihood).
#### Notes
This method is not deterministic: it computes a quantity called the free energy on X, then on a randomly corrupted version of X, and returns the log of the logistic function of the difference.
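A minimal hedged sketch, reusing the data of the class-level example (the corruption is random, so the returned values vary between calls; only the output shape is shown):
```
>>> import numpy as np
>>> from sklearn.neural_network import BernoulliRBM
>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> rbm = BernoulliRBM(n_components=2, random_state=42).fit(X)
>>> rbm.score_samples(X).shape  # one pseudo-likelihood value per sample
(4,)
```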
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neural_network/_rbm.py#L146)
Compute the hidden layer activation probabilities, P(h=1|v=X).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data to be transformed.
Returns:
**h**ndarray of shape (n\_samples, n\_components)
Latent representations of the data.
Examples using `sklearn.neural_network.BernoulliRBM`
----------------------------------------------------
[Restricted Boltzmann Machine features for digit classification](../../auto_examples/neural_networks/plot_rbm_logistic_classification#sphx-glr-auto-examples-neural-networks-plot-rbm-logistic-classification-py)
sklearn.metrics.homogeneity\_score
==================================
sklearn.metrics.homogeneity\_score(*labels\_true*, *labels\_pred*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_supervised.py#L487)
Homogeneity metric of a cluster labeling given a ground truth.
A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class.
This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way.
This metric is not symmetric: switching `labels_true` with `labels_pred` will return the [`completeness_score`](sklearn.metrics.completeness_score#sklearn.metrics.completeness_score "sklearn.metrics.completeness_score"), which will be different in general.
Read more in the [User Guide](../clustering#homogeneity-completeness).
Parameters:
**labels\_true**int array, shape = [n\_samples]
Ground truth class labels to be used as a reference.
**labels\_pred**array-like of shape (n\_samples,)
Cluster labels to evaluate.
Returns:
**homogeneity**float
Score between 0.0 and 1.0. 1.0 stands for perfectly homogeneous labeling.
See also
[`completeness_score`](sklearn.metrics.completeness_score#sklearn.metrics.completeness_score "sklearn.metrics.completeness_score")
Completeness metric of cluster labeling.
[`v_measure_score`](sklearn.metrics.v_measure_score#sklearn.metrics.v_measure_score "sklearn.metrics.v_measure_score")
V-Measure (NMI with arithmetic mean option).
#### References
[1] [Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A conditional entropy-based external cluster evaluation measure](https://aclweb.org/anthology/D/D07/D07-1043.pdf)
#### Examples
Perfect labelings are homogeneous:
```
>>> from sklearn.metrics.cluster import homogeneity_score
>>> homogeneity_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
```
Non-perfect labelings that further split classes into more clusters can be perfectly homogeneous:
```
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 0, 1, 2]))
1.000000
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 1, 2, 3]))
1.000000
```
Clusters that include samples from different classes do not make for a homogeneous labeling:
```
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 1, 0, 1]))
0.0...
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 0, 0, 0]))
0.0...
```
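To illustrate the asymmetry noted above, swapping the arguments yields the completeness score instead (a small hedged sketch with illustrative labelings):
```
>>> from sklearn.metrics.cluster import homogeneity_score, completeness_score
>>> labels_true = [0, 0, 1, 1]
>>> labels_pred = [0, 0, 1, 2]
>>> homogeneity_score(labels_true, labels_pred)
1.0
>>> completeness_score(labels_true, labels_pred)
0.666...
>>> homogeneity_score(labels_pred, labels_true)  # arguments swapped
0.666...
```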
Examples using `sklearn.metrics.homogeneity_score`
--------------------------------------------------
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Demo of DBSCAN clustering algorithm](../../auto_examples/cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py)
[Demo of affinity propagation clustering algorithm](../../auto_examples/cluster/plot_affinity_propagation#sphx-glr-auto-examples-cluster-plot-affinity-propagation-py)
[Clustering text documents using k-means](../../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
sklearn.metrics.d2\_tweedie\_score
==================================
sklearn.metrics.d2\_tweedie\_score(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*, *power=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_regression.py#L1166)
D^2 regression score function, fraction of Tweedie deviance explained.
Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A model that always uses the empirical mean of `y_true` as constant prediction, disregarding the input features, gets a D^2 score of 0.0.
Read more in the [User Guide](../model_evaluation#d2-score).
New in version 1.0.
Parameters:
**y\_true**array-like of shape (n\_samples,)
Ground truth (correct) target values.
**y\_pred**array-like of shape (n\_samples,)
Estimated target values.
**sample\_weight**array-like of shape (n\_samples,), optional
Sample weights.
**power**float, default=0
Tweedie power parameter. Either power <= 0 or power >= 1.
The higher `power`, the less weight is given to extreme deviations between true and predicted targets.
* power < 0: Extreme stable distribution. Requires: y\_pred > 0.
* power = 0 : Normal distribution, output corresponds to r2\_score. y\_true and y\_pred can be any real numbers.
* power = 1 : Poisson distribution. Requires: y\_true >= 0 and y\_pred > 0.
* 1 < power < 2 : Compound Poisson distribution. Requires: y\_true >= 0 and y\_pred > 0.
* power = 2 : Gamma distribution. Requires: y\_true > 0 and y\_pred > 0.
* power = 3 : Inverse Gaussian distribution. Requires: y\_true > 0 and y\_pred > 0.
* otherwise : Positive stable distribution. Requires: y\_true > 0 and y\_pred > 0.
Returns:
**z**float or ndarray of floats
The D^2 score.
#### Notes
This is not a symmetric function.
Like R^2, D^2 score may be negative (it need not actually be the square of a quantity D).
This metric is not well-defined for single samples and will return a NaN value if n\_samples is less than two.
#### References
[1] Eq. (3.11) of Hastie, Trevor J., Robert Tibshirani and Martin J. Wainwright. “Statistical Learning with Sparsity: The Lasso and Generalizations.” (2015). <https://hastie.su.domains/StatLearnSparsity/>
#### Examples
```
>>> from sklearn.metrics import d2_tweedie_score
>>> y_true = [0.5, 1, 2.5, 7]
>>> y_pred = [1, 1, 5, 3.5]
>>> d2_tweedie_score(y_true, y_pred)
0.285...
>>> d2_tweedie_score(y_true, y_pred, power=1)
0.487...
>>> d2_tweedie_score(y_true, y_pred, power=2)
0.630...
>>> d2_tweedie_score(y_true, y_true, power=2)
1.0
```
sklearn.metrics.pairwise.chi2\_kernel
=====================================
sklearn.metrics.pairwise.chi2\_kernel(*X*, *Y=None*, *gamma=1.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L1451)
Compute the exponential chi-squared kernel between X and Y.
The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms.
The chi-squared kernel is given by:
```
k(x, y) = exp(-gamma Sum [(x - y)^2 / (x + y)])
```
It can be interpreted as a weighted difference per entry.
Read more in the [User Guide](../metrics#chi2-kernel).
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features)
A feature array.
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
An optional second feature array. If `None`, uses `Y=X`.
**gamma**float, default=1
Scaling parameter of the chi2 kernel.
Returns:
**kernel\_matrix**ndarray of shape (n\_samples\_X, n\_samples\_Y)
The kernel matrix.
See also
[`additive_chi2_kernel`](sklearn.metrics.pairwise.additive_chi2_kernel#sklearn.metrics.pairwise.additive_chi2_kernel "sklearn.metrics.pairwise.additive_chi2_kernel")
The additive version of this kernel.
[`sklearn.kernel_approximation.AdditiveChi2Sampler`](sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler "sklearn.kernel_approximation.AdditiveChi2Sampler")
A Fourier approximation to the additive version of this kernel.
#### References
* Zhang, J. and Marszalek, M. and Lazebnik, S. and Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study International Journal of Computer Vision 2007 <https://hal.archives-ouvertes.fr/hal-00171412/document>
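A minimal hedged sketch of this kernel on two tiny non-negative, histogram-like rows (the values are chosen purely for illustration):
```
>>> from sklearn.metrics.pairwise import chi2_kernel
>>> X = [[0.3, 0.7], [0.6, 0.4]]
>>> Y = [[0.5, 0.5]]
>>> chi2_kernel(X, Y, gamma=1.0)  # one kernel value per (row of X, row of Y) pair
array([[0.92...],
       [0.98...]])
```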
sklearn.neighbors.LocalOutlierFactor
====================================
*class*sklearn.neighbors.LocalOutlierFactor(*n\_neighbors=20*, *\**, *algorithm='auto'*, *leaf\_size=30*, *metric='minkowski'*, *p=2*, *metric\_params=None*, *contamination='auto'*, *novelty=False*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_lof.py#L19)
Unsupervised Outlier Detection using the Local Outlier Factor (LOF).
The anomaly score of each sample is called the Local Outlier Factor. It measures the local deviation of the density of a given sample with respect to its neighbors. It is local in that the anomaly score depends on how isolated the object is with respect to the surrounding neighborhood. More precisely, locality is given by k-nearest neighbors, whose distance is used to estimate the local density. By comparing the local density of a sample to the local densities of its neighbors, one can identify samples that have a substantially lower density than their neighbors. These are considered outliers.
New in version 0.19.
Parameters:
**n\_neighbors**int, default=20
Number of neighbors to use by default for [`kneighbors`](#sklearn.neighbors.LocalOutlierFactor.kneighbors "sklearn.neighbors.LocalOutlierFactor.kneighbors") queries. If n\_neighbors is larger than the number of samples provided, all samples will be used.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree")
* ‘kd\_tree’ will use [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree")
* ‘brute’ will use a brute-force search.
* ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.neighbors.LocalOutlierFactor.fit "sklearn.neighbors.LocalOutlierFactor.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree") or [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree"). This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Default is “minkowski”, which results in the standard Euclidean distance when p = 2. See the documentation of [scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in [`distance_metrics`](sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics "sklearn.metrics.pairwise.distance_metrics") for valid metric values.
If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a [sparse graph](https://scikit-learn.org/1.1/glossary.html#term-sparse-graph), in which case only “nonzero” elements may be considered neighbors.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
**p**int, default=2
Parameter for the Minkowski metric from `sklearn.metrics.pairwise.pairwise_distances`. When p = 1, this is equivalent to using manhattan\_distance (l1), and euclidean\_distance (l2) for p = 2. For arbitrary p, minkowski\_distance (l\_p) is used.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**contamination**‘auto’ or float, default=’auto’
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the scores of the samples.
* if ‘auto’, the threshold is determined as in the original paper,
* if a float, the contamination should be in the range (0, 0.5].
Changed in version 0.22: The default value of `contamination` changed from 0.1 to `'auto'`.
**novelty**bool, default=False
By default, LocalOutlierFactor is only meant to be used for outlier detection (novelty=False). Set novelty to True if you want to use LocalOutlierFactor for novelty detection. In this case be aware that you should only use predict, decision\_function and score\_samples on new unseen data and not on the training set; and note that the results obtained this way may differ from the standard LOF results.
New in version 0.20.
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Attributes:
**negative\_outlier\_factor\_**ndarray of shape (n\_samples,)
The opposite LOF of the training samples. The higher, the more normal. Inliers tend to have a LOF score close to 1 (`negative_outlier_factor_` close to -1), while outliers tend to have a larger LOF score.
The local outlier factor (LOF) of a sample captures its supposed ‘degree of abnormality’. It is the average of the ratio of the local reachability density of a sample and those of its k-nearest neighbors.
**n\_neighbors\_**int
The actual number of neighbors used for [`kneighbors`](#sklearn.neighbors.LocalOutlierFactor.kneighbors "sklearn.neighbors.LocalOutlierFactor.kneighbors") queries.
**offset\_**float
Offset used to obtain binary labels from the raw scores. Observations having a negative\_outlier\_factor smaller than `offset_` are detected as abnormal. The offset is set to -1.5 (inliers score around -1), except when a contamination parameter different from “auto” is provided. In that case, the offset is defined in such a way that we obtain the expected number of outliers in training.
New in version 0.20.
**effective\_metric\_**str
The effective metric used for the distance computation.
**effective\_metric\_params\_**dict
The effective additional keyword arguments for the metric function.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_fit\_**int
It is the number of samples in the fitted data.
See also
[`sklearn.svm.OneClassSVM`](sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM")
Unsupervised Outlier Detection using Support Vector Machine.
#### References
[1] Breunig, M. M., Kriegel, H. P., Ng, R. T., & Sander, J. (2000, May). LOF: identifying density-based local outliers. In ACM SIGMOD Record.
#### Examples
```
>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> clf = LocalOutlierFactor(n_neighbors=2)
>>> clf.fit_predict(X)
array([ 1,  1, -1,  1])
>>> clf.negative_outlier_factor_
array([ -0.9821..., -1.0370..., -73.3697..., -0.9821...])
```
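A hedged sketch of the novelty-detection mode (`novelty=True`), where `predict` and `decision_function` are applied only to new, unseen samples; the data values are illustrative:
```
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X_train = [[-1.1], [0.2], [0.3], [0.4]]
>>> clf = LocalOutlierFactor(n_neighbors=2, novelty=True).fit(X_train)
>>> X_new = [[0.25], [50.0]]  # unseen data only
>>> clf.predict(X_new)
array([ 1, -1])
>>> clf.decision_function(X_new).shape
(2,)
```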
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.neighbors.LocalOutlierFactor.decision_function "sklearn.neighbors.LocalOutlierFactor.decision_function")(X) | Shifted opposite of the Local Outlier Factor of X. |
| [`fit`](#sklearn.neighbors.LocalOutlierFactor.fit "sklearn.neighbors.LocalOutlierFactor.fit")(X[, y]) | Fit the local outlier factor detector from the training dataset. |
| [`fit_predict`](#sklearn.neighbors.LocalOutlierFactor.fit_predict "sklearn.neighbors.LocalOutlierFactor.fit_predict")(X[, y]) | Fit the model to the training set X and return the labels. |
| [`get_params`](#sklearn.neighbors.LocalOutlierFactor.get_params "sklearn.neighbors.LocalOutlierFactor.get_params")([deep]) | Get parameters for this estimator. |
| [`kneighbors`](#sklearn.neighbors.LocalOutlierFactor.kneighbors "sklearn.neighbors.LocalOutlierFactor.kneighbors")([X, n\_neighbors, return\_distance]) | Find the K-neighbors of a point. |
| [`kneighbors_graph`](#sklearn.neighbors.LocalOutlierFactor.kneighbors_graph "sklearn.neighbors.LocalOutlierFactor.kneighbors_graph")([X, n\_neighbors, mode]) | Compute the (weighted) graph of k-Neighbors for points in X. |
| [`predict`](#sklearn.neighbors.LocalOutlierFactor.predict "sklearn.neighbors.LocalOutlierFactor.predict")([X]) | Predict the labels (1 inlier, -1 outlier) of X according to LOF. |
| [`score_samples`](#sklearn.neighbors.LocalOutlierFactor.score_samples "sklearn.neighbors.LocalOutlierFactor.score_samples")(X) | Opposite of the Local Outlier Factor of X. |
| [`set_params`](#sklearn.neighbors.LocalOutlierFactor.set_params "sklearn.neighbors.LocalOutlierFactor.set_params")(\*\*params) | Set the parameters of this estimator. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_lof.py#L383)
Shifted opposite of the Local Outlier Factor of X.
Bigger is better, i.e. large values correspond to inliers.
**Only available for novelty detection (when novelty is set to True).** The shift offset allows a zero threshold for being an outlier. The argument X is supposed to contain *new data*: if X contains a point from training, it considers the latter in its own neighborhood. Also, the samples in X are not considered in the neighborhood of any point.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples.
Returns:
**shifted\_opposite\_lof\_scores**ndarray of shape (n\_samples,)
The shifted opposite of the Local Outlier Factor of each input sample. The lower, the more abnormal. Negative scores represent outliers, positive scores represent inliers.
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_lof.py#L247)
Fit the local outlier factor detector from the training dataset.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples) if metric=’precomputed’
Training data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**LocalOutlierFactor
The fitted local outlier factor detector.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_lof.py#L219)
Fit the model to the training set X and return the labels.
**Not available for novelty detection (when novelty is set to True).** Label is 1 for an inlier and -1 for an outlier according to the LOF score and the contamination parameter.
Parameters:
**X**array-like of shape (n\_samples, n\_features), default=None
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**is\_inlier**ndarray of shape (n\_samples,)
Returns -1 for anomalies/outliers and 1 for inliers.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
kneighbors(*X=None*, *n\_neighbors=None*, *return\_distance=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L670)
Find the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.
Parameters:
**X**array-like, shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**n\_neighbors**int, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
**return\_distance**bool, default=True
Whether or not to return the distances.
Returns:
**neigh\_dist**ndarray of shape (n\_queries, n\_neighbors)
Array representing the lengths to points, only present if return\_distance=True.
**neigh\_ind**ndarray of shape (n\_queries, n\_neighbors)
Indices of the nearest points in the population matrix.
#### Examples
In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1]
```
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
```
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points:
```
>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
       [2]]...)
```
kneighbors\_graph(*X=None*, *n\_neighbors=None*, *mode='connectivity'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L860)
Compute the (weighted) graph of k-Neighbors for points in X.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For `metric='precomputed'` the shape should be (n\_queries, n\_indexed). Otherwise the shape should be (n\_queries, n\_features).
**n\_neighbors**int, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
**mode**{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are distances between points, type of distance depends on the selected metric parameter in NearestNeighbors class.
Returns:
**A**sparse-matrix of shape (n\_queries, n\_samples\_fit)
`n_samples_fit` is the number of samples in the fitted data. `A[i, j]` gives the weight of the edge connecting `i` to `j`. The matrix is of CSR format.
See also
[`NearestNeighbors.radius_neighbors_graph`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.radius_neighbors_graph "sklearn.neighbors.NearestNeighbors.radius_neighbors_graph")
Compute the (weighted) graph of Neighbors for points in X.
#### Examples
```
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
       [0., 1., 1.],
       [1., 0., 1.]])
```
predict(*X=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_lof.py#L318)
Predict the labels (1 inlier, -1 outlier) of X according to LOF.
**Only available for novelty detection (when novelty is set to True).** This method allows to generalize prediction to *new observations* (not in the training set). Note that the result of `clf.fit(X)` then `clf.predict(X)` with `novelty=True` may differ from the result obtained by `clf.fit_predict(X)` with `novelty=False`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples.
Returns:
**is\_inlier**ndarray of shape (n\_samples,)
Returns -1 for anomalies/outliers and +1 for inliers.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_lof.py#L423)
Opposite of the Local Outlier Factor of X.
It is the opposite as bigger is better, i.e. large values correspond to inliers.
**Only available for novelty detection (when novelty is set to True).** The argument X is supposed to contain *new data*: if X contains a point from training, it considers the latter in its own neighborhood. Also, the samples in X are not considered in the neighborhood of any point. Because of this, the scores obtained via `score_samples` may differ from the standard LOF scores. The standard LOF scores for the training data are available via the `negative_outlier_factor_` attribute.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples.
Returns:
**opposite\_lof\_scores**ndarray of shape (n\_samples,)
The opposite of the Local Outlier Factor of each input sample. The lower, the more abnormal.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.neighbors.LocalOutlierFactor`
-----------------------------------------------------
[Comparing anomaly detection algorithms for outlier detection on toy datasets](../../auto_examples/miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py)
[Evaluation of outlier detection estimators](../../auto_examples/miscellaneous/plot_outlier_detection_bench#sphx-glr-auto-examples-miscellaneous-plot-outlier-detection-bench-py)
[Novelty detection with Local Outlier Factor (LOF)](../../auto_examples/neighbors/plot_lof_novelty_detection#sphx-glr-auto-examples-neighbors-plot-lof-novelty-detection-py)
[Outlier detection with Local Outlier Factor (LOF)](../../auto_examples/neighbors/plot_lof_outlier_detection#sphx-glr-auto-examples-neighbors-plot-lof-outlier-detection-py)
sklearn.model\_selection.ParameterGrid
======================================
*class*sklearn.model\_selection.ParameterGrid(*param\_grid*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L49)
Grid of parameters with a discrete number of values for each.
Can be used to iterate over parameter value combinations with the Python built-in function iter. The order of the generated parameter combinations is deterministic.
Read more in the [User Guide](../grid_search#grid-search).
Parameters:
**param\_grid**dict of str to sequence, or sequence of such
The parameter grid to explore, as a dictionary mapping estimator parameters to sequences of allowed values.
An empty dict signifies default parameters.
A sequence of dicts signifies a sequence of grids to search, and is useful to avoid exploring parameter combinations that make no sense or have no effect. See the examples below.
See also
[`GridSearchCV`](sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV")
Uses [`ParameterGrid`](#sklearn.model_selection.ParameterGrid "sklearn.model_selection.ParameterGrid") to perform a full parallelized parameter search.
#### Examples
```
>>> from sklearn.model_selection import ParameterGrid
>>> param_grid = {'a': [1, 2], 'b': [True, False]}
>>> list(ParameterGrid(param_grid)) == (
... [{'a': 1, 'b': True}, {'a': 1, 'b': False},
... {'a': 2, 'b': True}, {'a': 2, 'b': False}])
True
```
```
>>> grid = [{'kernel': ['linear']}, {'kernel': ['rbf'], 'gamma': [1, 10]}]
>>> list(ParameterGrid(grid)) == [{'kernel': 'linear'},
... {'kernel': 'rbf', 'gamma': 1},
... {'kernel': 'rbf', 'gamma': 10}]
True
>>> ParameterGrid(grid)[1] == {'kernel': 'rbf', 'gamma': 1}
True
```
sklearn.preprocessing.maxabs\_scale
===================================
sklearn.preprocessing.maxabs\_scale(*X*, *\**, *axis=0*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L1259)
Scale each feature to the [-1, 1] range without breaking the sparsity.
This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0.
This scaler can also be applied to sparse CSR or CSC matrices.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data.
**axis**int, default=0
Axis along which to scale. If 0, independently scale each feature, otherwise (if 1) scale each sample.
**copy**bool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array).
Returns:
**X\_tr**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
The transformed data.
Warning
Risk of data leak: Do not use [`maxabs_scale`](#sklearn.preprocessing.maxabs_scale "sklearn.preprocessing.maxabs_scale") unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using [`MaxAbsScaler`](sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler") within a [Pipeline](../compose#pipeline) in order to prevent most risks of data leaking: `pipe = make_pipeline(MaxAbsScaler(), LogisticRegression())`.
See also
[`MaxAbsScaler`](sklearn.preprocessing.maxabsscaler#sklearn.preprocessing.MaxAbsScaler "sklearn.preprocessing.MaxAbsScaler")
Performs scaling to the [-1, 1] range using the Transformer API (e.g. as part of a preprocessing [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")).
#### Notes
NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation.
For a comparison of the different scalers, transformers, and normalizers, see [examples/preprocessing/plot\_all\_scaling.py](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
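A minimal hedged sketch (the matrix below is illustrative; each column is divided by its maximum absolute value):
```
>>> import numpy as np
>>> from sklearn.preprocessing import maxabs_scale
>>> X = np.array([[1., -2.], [2., 4.], [-4., 2.]])
>>> maxabs_scale(X)
array([[ 0.25, -0.5 ],
       [ 0.5 ,  1.  ],
       [-1.  ,  0.5 ]])
```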
sklearn.datasets.dump\_svmlight\_file
=====================================
sklearn.datasets.dump\_svmlight\_file(*X*, *y*, *f*, *\**, *zero\_based=True*, *comment=None*, *query\_id=None*, *multilabel=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_svmlight_format_io.py#L427)
Dump the dataset in svmlight / libsvm file format.
This format is a text-based format, with one sample per line. It does not store zero-valued features, and is hence suitable for sparse datasets.
The first element of each line can be used to store a target variable to predict.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**{array-like, sparse matrix}, shape = [n\_samples (, n\_labels)]
Target values. Class labels must be an integer or float, or array-like objects of integer or float for multilabel classifications.
**f**str or file-like in binary mode
If string, specifies the path that will contain the data. If file-like, data will be written to f. f should be opened in binary mode.
**zero\_based**bool, default=True
Whether column indices should be written zero-based (True) or one-based (False).
**comment**str, default=None
Comment to insert at the top of the file. This should be either a Unicode string, which will be encoded as UTF-8, or an ASCII byte string. If a comment is given, then it will be preceded by one that identifies the file as having been dumped by scikit-learn. Note that not all tools grok comments in SVMlight files.
**query\_id**array-like of shape (n\_samples,), default=None
Array containing pairwise preference constraints (qid in svmlight format).
**multilabel**bool, default=False
Samples may have several labels each (see <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html>).
New in version 0.17: parameter *multilabel* to support multilabel datasets.
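A minimal hedged sketch writing two dense samples to an in-memory binary buffer (the data and the `BytesIO` buffer are illustrative; only the zero-based index layout of the first line is checked):
```
>>> from io import BytesIO
>>> import numpy as np
>>> from sklearn.datasets import dump_svmlight_file
>>> X = np.array([[0., 2.], [1., 0.]])  # zero-valued features are not written
>>> y = np.array([0, 1])
>>> buf = BytesIO()                     # any file-like opened in binary mode works
>>> dump_svmlight_file(X, y, buf, zero_based=True)
>>> buf.getvalue().startswith(b"0 1:")  # first line: label 0, then feature index 1
True
```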
Examples using `sklearn.datasets.dump_svmlight_file`
----------------------------------------------------
[Libsvm GUI](../../auto_examples/applications/svm_gui#sphx-glr-auto-examples-applications-svm-gui-py)
sklearn.metrics.d2\_absolute\_error\_score
==========================================
sklearn.metrics.d2\_absolute\_error\_score(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*, *multioutput='uniform\_average'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_regression.py#L1411)
\(D^2\) regression score function, fraction of absolute error explained.
Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A model that always uses the empirical median of `y_true` as constant prediction, disregarding the input features, gets a \(D^2\) score of 0.0.
Read more in the [User Guide](../model_evaluation#d2-score).
New in version 1.1.
Parameters:
**y\_true**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Ground truth (correct) target values.
**y\_pred**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Estimated target values.
**sample\_weight**array-like of shape (n\_samples,), optional
Sample weights.
**multioutput**{‘raw\_values’, ‘uniform\_average’} or array-like of shape (n\_outputs,), default=’uniform\_average’
Defines aggregating of multiple output values. Array-like value defines weights used to average scores.
‘raw\_values’ :
Returns a full set of errors in case of multioutput input.
‘uniform\_average’ :
Scores of all outputs are averaged with uniform weight.
Returns:
**score**float or ndarray of floats
The \(D^2\) score with an absolute error deviance or ndarray of scores if ‘multioutput’ is ‘raw\_values’.
#### Notes
Like \(R^2\), \(D^2\) score may be negative (it need not actually be the square of a quantity D).
This metric is not well-defined for single samples and will return a NaN value if n\_samples is less than two.
#### References
[1] Eq. (3.11) of Hastie, Trevor J., Robert Tibshirani and Martin J. Wainwright. “Statistical Learning with Sparsity: The Lasso and Generalizations.” (2015). <https://hastie.su.domains/StatLearnSparsity/>
#### Examples
```
>>> from sklearn.metrics import d2_absolute_error_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> d2_absolute_error_score(y_true, y_pred)
0.764...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> d2_absolute_error_score(y_true, y_pred, multioutput='uniform_average')
0.691...
>>> d2_absolute_error_score(y_true, y_pred, multioutput='raw_values')
array([0.8125 , 0.57142857])
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> d2_absolute_error_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> d2_absolute_error_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [3, 2, 1]
>>> d2_absolute_error_score(y_true, y_pred)
-1.0
```
sklearn.decomposition.MiniBatchSparsePCA
========================================
*class*sklearn.decomposition.MiniBatchSparsePCA(*n\_components=None*, *\**, *alpha=1*, *ridge\_alpha=0.01*, *n\_iter=100*, *callback=None*, *batch\_size=3*, *verbose=False*, *shuffle=True*, *n\_jobs=None*, *method='lars'*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_sparse_pca.py#L252)
Mini-batch Sparse Principal Components Analysis.
Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha.
Read more in the [User Guide](../decomposition#sparsepca).
Parameters:
**n\_components**int, default=None
Number of sparse atoms to extract. If None, then `n_components` is set to `n_features`.
**alpha**int, default=1
Sparsity controlling parameter. Higher values lead to sparser components.
**ridge\_alpha**float, default=0.01
Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.
**n\_iter**int, default=100
Number of iterations to perform for each mini batch.
**callback**callable, default=None
Callable that gets invoked every five iterations.
**batch\_size**int, default=3
The number of features to take in each mini batch.
**verbose**int or bool, default=False
Controls the verbosity; the higher, the more messages. Defaults to 0.
**shuffle**bool, default=True
Whether to shuffle the data before splitting it in batches.
**n\_jobs**int, default=None
Number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**method**{‘lars’, ‘cd’}, default=’lars’
Method to be used for optimization. ‘lars’: uses the least angle regression method to solve the lasso problem (`linear_model.lars_path`). ‘cd’: uses the coordinate descent method to compute the Lasso solution (`linear_model.Lasso`). Lars will be faster if the estimated components are sparse.
**random\_state**int, RandomState instance or None, default=None
Used for random shuffling when `shuffle` is set to `True`, during online dictionary learning. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**components\_**ndarray of shape (n\_components, n\_features)
Sparse components extracted from the data.
**n\_components\_**int
Estimated number of components.
New in version 0.23.
**n\_iter\_**int
Number of iterations run.
**mean\_**ndarray of shape (n\_features,)
Per-feature empirical mean, estimated from the training set. Equal to `X.mean(axis=0)`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`DictionaryLearning`](sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning "sklearn.decomposition.DictionaryLearning")
Find a dictionary that sparsely encodes data.
[`IncrementalPCA`](sklearn.decomposition.incrementalpca#sklearn.decomposition.IncrementalPCA "sklearn.decomposition.IncrementalPCA")
Incremental principal components analysis.
[`PCA`](sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")
Principal component analysis.
[`SparsePCA`](sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA")
Sparse Principal Components Analysis.
[`TruncatedSVD`](sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD")
Dimensionality reduction using truncated SVD.
#### Examples
```
>>> import numpy as np
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.decomposition import MiniBatchSparsePCA
>>> X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)
>>> transformer = MiniBatchSparsePCA(n_components=5, batch_size=50,
... random_state=0)
>>> transformer.fit(X)
MiniBatchSparsePCA(...)
>>> X_transformed = transformer.transform(X)
>>> X_transformed.shape
(200, 5)
>>> # most values in the components_ are zero (sparsity)
>>> np.mean(transformer.components_ == 0)
0.94
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.decomposition.MiniBatchSparsePCA.fit "sklearn.decomposition.MiniBatchSparsePCA.fit")(X[, y]) | Fit the model from data in X. |
| [`fit_transform`](#sklearn.decomposition.MiniBatchSparsePCA.fit_transform "sklearn.decomposition.MiniBatchSparsePCA.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.decomposition.MiniBatchSparsePCA.get_feature_names_out "sklearn.decomposition.MiniBatchSparsePCA.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.decomposition.MiniBatchSparsePCA.get_params "sklearn.decomposition.MiniBatchSparsePCA.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.decomposition.MiniBatchSparsePCA.set_params "sklearn.decomposition.MiniBatchSparsePCA.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.decomposition.MiniBatchSparsePCA.transform "sklearn.decomposition.MiniBatchSparsePCA.transform")(X) | Least Squares projection of the data onto the sparse components. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_sparse_pca.py#L393)
Fit the model from data in X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.decomposition.MiniBatchSparsePCA.fit "sklearn.decomposition.MiniBatchSparsePCA.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_sparse_pca.py#L209)
Least Squares projection of the data onto the sparse components.
To avoid instability issues in case the system is under-determined, regularization can be applied (Ridge regression) via the `ridge_alpha` parameter.
Note that Sparse PCA components orthogonality is not enforced as in PCA hence one cannot use a simple linear projection.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Test data to be transformed, must have the same number of features as the data used to train the model.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_components)
Transformed data.
Examples using `sklearn.decomposition.MiniBatchSparsePCA`
---------------------------------------------------------
[Faces dataset decompositions](../../auto_examples/decomposition/plot_faces_decomposition#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py)
sklearn.linear\_model.Ridge
===========================
*class*sklearn.linear\_model.Ridge(*alpha=1.0*, *\**, *fit\_intercept=True*, *normalize='deprecated'*, *copy\_X=True*, *max\_iter=None*, *tol=0.001*, *solver='auto'*, *positive=False*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L910)
Linear least squares with l2 regularization.
Minimizes the objective function:
```
||y - Xw||^2_2 + alpha * ||w||^2_2
```
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n\_samples, n\_targets)).
Read more in the [User Guide](../linear_model#ridge-regression).
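A minimal hedged sketch of fitting this estimator on random data (the shapes and values are illustrative only):
```
>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(10, 5)
>>> y = rng.randn(10)
>>> clf = Ridge(alpha=1.0)
>>> clf.fit(X, y)
Ridge()
>>> clf.coef_.shape
(5,)
```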
Parameters:
**alpha**{float, ndarray of shape (n\_targets,)}, default=1.0
Constant that multiplies the L2 term, controlling regularization strength. `alpha` must be a non-negative float i.e. in `[0, inf)`.
When `alpha = 0`, the objective is equivalent to ordinary least squares, solved by the [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") object. For numerical reasons, using `alpha = 0` with the `Ridge` object is not advised. Instead, you should use the [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") object.
If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
**fit\_intercept**bool, default=True
Whether to fit the intercept for this model. If set to False, no intercept will be used in calculations (i.e. `X` and `y` are expected to be centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**copy\_X**bool, default=True
If True, X will be copied; else, it may be overwritten.
**max\_iter**int, default=None
Maximum number of iterations for conjugate gradient solver. For ‘sparse\_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For ‘sag’ solver, the default value is 1000. For ‘lbfgs’ solver, the default value is 15000.
**tol**float, default=1e-3
Precision of the solution.
**solver**{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse\_cg’, ‘sag’, ‘saga’, ‘lbfgs’}, default=’auto’
Solver to use in the computational routines:
* ‘auto’ chooses the solver automatically based on the type of data.
* ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. It is the most stable solver, in particular more stable for singular matrices than ‘cholesky’ at the cost of being slower.
* ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution.
* ‘sparse\_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set `tol` and `max_iter`).
* ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
* ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n\_samples and n\_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
* ‘lbfgs’ uses L-BFGS-B algorithm implemented in `scipy.optimize.minimize`. It can be used only when `positive` is True.
All solvers except ‘svd’ support both dense and sparse data. However, only ‘lsqr’, ‘sag’, ‘sparse\_cg’, and ‘lbfgs’ support sparse input when `fit_intercept` is True.
New in version 0.17: Stochastic Average Gradient descent solver.
New in version 0.19: SAGA solver.
**positive**bool, default=False
When set to `True`, forces the coefficients to be positive. Only ‘lbfgs’ solver is supported in this case.
**random\_state**int, RandomState instance, default=None
Used when `solver` == ‘sag’ or ‘saga’ to shuffle the data. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state) for details.
New in version 0.17: `random_state` to support Stochastic Average Gradient.
Attributes:
**coef\_**ndarray of shape (n\_features,) or (n\_targets, n\_features)
Weight vector(s).
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function. Set to 0.0 if `fit_intercept = False`.
**n\_iter\_**None or ndarray of shape (n\_targets,)
Actual number of iterations for each target. Available only for sag and lsqr solvers. Other solvers will return None.
New in version 0.17.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`RidgeClassifier`](sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier")
Ridge classifier.
[`RidgeCV`](sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV")
Ridge regression with built-in cross validation.
[`KernelRidge`](sklearn.kernel_ridge.kernelridge#sklearn.kernel_ridge.KernelRidge "sklearn.kernel_ridge.KernelRidge")
Kernel ridge regression combines ridge regression with the kernel trick.
#### Notes
Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to `1 / (2C)` in other linear models such as [`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") or [`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC").
#### Examples
```
>>> from sklearn.linear_model import Ridge
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> rng = np.random.RandomState(0)
>>> y = rng.randn(n_samples)
>>> X = rng.randn(n_samples, n_features)
>>> clf = Ridge(alpha=1.0)
>>> clf.fit(X, y)
Ridge()
```
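A hedged, self-contained follow-up sketch (synthetic data, not from the docstring) showing how the fitted coefficients, intercept, and predictions are accessed:
```
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data with known weights, used only for illustration.
rng = np.random.RandomState(0)
X = rng.randn(20, 5)
y = X @ np.array([1.0, 2.0, 0.0, -1.0, 3.0]) + 0.1 * rng.randn(20)

reg = Ridge(alpha=1.0).fit(X, y)
print(reg.coef_)              # shrunken estimates of the true weights
print(reg.intercept_)         # scalar, since y is 1d
print(reg.predict(X[:3]))     # predictions for the first three samples
print(reg.score(X, y))        # in-sample R^2
```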
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.Ridge.fit "sklearn.linear_model.Ridge.fit")(X, y[, sample\_weight]) | Fit Ridge regression model. |
| [`get_params`](#sklearn.linear_model.Ridge.get_params "sklearn.linear_model.Ridge.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.Ridge.predict "sklearn.linear_model.Ridge.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.Ridge.score "sklearn.linear_model.Ridge.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.Ridge.set_params "sklearn.linear_model.Ridge.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L1101)
Fit Ridge regression model.
Parameters:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Training data.
**y**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**sample\_weight**float or ndarray of shape (n\_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight.
Returns:
**self**object
Fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
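As a hedged check of the \(R^2\) definition given above for `score` (synthetic data, assumed for illustration), the value returned by `score` matches `sklearn.metrics.r2_score` computed on the same predictions:
```
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.RandomState(0)
X = rng.randn(50, 4)
y = rng.randn(50)

reg = Ridge(alpha=1.0).fit(X, y)
# score(X, y) and r2_score(y, reg.predict(X)) agree by definition.
assert np.isclose(reg.score(X, y), r2_score(y, reg.predict(X)))
```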
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.Ridge`
-------------------------------------------
[Compressive sensing: tomography reconstruction with L1 prior (Lasso)](../../auto_examples/applications/plot_tomography_l1_reconstruction#sphx-glr-auto-examples-applications-plot-tomography-l1-reconstruction-py)
[Prediction Latency](../../auto_examples/applications/plot_prediction_latency#sphx-glr-auto-examples-applications-plot-prediction-latency-py)
[Comparison of kernel ridge and Gaussian process regression](../../auto_examples/gaussian_process/plot_compare_gpr_krr#sphx-glr-auto-examples-gaussian-process-plot-compare-gpr-krr-py)
[HuberRegressor vs Ridge on dataset with strong outliers](../../auto_examples/linear_model/plot_huber_vs_ridge#sphx-glr-auto-examples-linear-model-plot-huber-vs-ridge-py)
[Ordinary Least Squares and Ridge Regression Variance](../../auto_examples/linear_model/plot_ols_ridge_variance#sphx-glr-auto-examples-linear-model-plot-ols-ridge-variance-py)
[Plot Ridge coefficients as a function of the L2 regularization](../../auto_examples/linear_model/plot_ridge_coeffs#sphx-glr-auto-examples-linear-model-plot-ridge-coeffs-py)
[Plot Ridge coefficients as a function of the regularization](../../auto_examples/linear_model/plot_ridge_path#sphx-glr-auto-examples-linear-model-plot-ridge-path-py)
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Polynomial and Spline interpolation](../../auto_examples/linear_model/plot_polynomial_interpolation#sphx-glr-auto-examples-linear-model-plot-polynomial-interpolation-py)
[Common pitfalls in the interpretation of coefficients of linear models](../../auto_examples/inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
[Imputing missing values with variants of IterativeImputer](../../auto_examples/impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)
scikit_learn sklearn.gaussian_process.kernels.RationalQuadratic sklearn.gaussian\_process.kernels.RationalQuadratic
===================================================
*class*sklearn.gaussian\_process.kernels.RationalQuadratic(*length\_scale=1.0*, *alpha=1.0*, *length\_scale\_bounds=(1e-05, 100000.0)*, *alpha\_bounds=(1e-05, 100000.0)*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1773)
Rational Quadratic kernel.
The RationalQuadratic kernel can be seen as a scale mixture (an infinite sum) of RBF kernels with different characteristic length scales. It is parameterized by a length scale parameter \(l>0\) and a scale mixture parameter \(\alpha>0\). Only the isotropic variant where length\_scale \(l\) is a scalar is supported at the moment. The kernel is given by:
\[k(x\_i, x\_j) = \left( 1 + \frac{d(x\_i, x\_j)^2 }{ 2\alpha l^2}\right)^{-\alpha}\] where \(\alpha\) is the scale mixture parameter, \(l\) is the length scale of the kernel and \(d(\cdot,\cdot)\) is the Euclidean distance. For advice on how to set the parameters, see e.g. [[1]](#rc7764613bdcf-1).
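The following hedged numerical check (toy inputs, assumed for illustration) verifies that evaluating the kernel reproduces the closed-form expression above:
```
import numpy as np
from sklearn.gaussian_process.kernels import RationalQuadratic

l, alpha = 2.0, 1.5
kernel = RationalQuadratic(length_scale=l, alpha=alpha)

# Two arbitrary points; d(., .) is the Euclidean distance.
x_i = np.array([[0.0, 0.0]])
x_j = np.array([[1.0, 2.0]])

d2 = np.sum((x_i - x_j) ** 2)
expected = (1.0 + d2 / (2.0 * alpha * l ** 2)) ** (-alpha)
assert np.isclose(kernel(x_i, x_j)[0, 0], expected)
```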
Read more in the [User Guide](../gaussian_process#gp-kernels).
New in version 0.18.
Parameters:
**length\_scale**float > 0, default=1.0
The length scale of the kernel.
**alpha**float > 0, default=1.0
Scale mixture parameter
**length\_scale\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length\_scale’. If set to “fixed”, ‘length\_scale’ cannot be changed during hyperparameter tuning.
**alpha\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘alpha’. If set to “fixed”, ‘alpha’ cannot be changed during hyperparameter tuning.
Attributes:
[`bounds`](#sklearn.gaussian_process.kernels.RationalQuadratic.bounds "sklearn.gaussian_process.kernels.RationalQuadratic.bounds")
Returns the log-transformed bounds on the theta.
**hyperparameter\_alpha**
**hyperparameter\_length\_scale**
[`hyperparameters`](#sklearn.gaussian_process.kernels.RationalQuadratic.hyperparameters "sklearn.gaussian_process.kernels.RationalQuadratic.hyperparameters")
Returns a list of all hyperparameter specifications.
[`n_dims`](#sklearn.gaussian_process.kernels.RationalQuadratic.n_dims "sklearn.gaussian_process.kernels.RationalQuadratic.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.RationalQuadratic.requires_vector_input "sklearn.gaussian_process.kernels.RationalQuadratic.requires_vector_input")
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
[`theta`](#sklearn.gaussian_process.kernels.RationalQuadratic.theta "sklearn.gaussian_process.kernels.RationalQuadratic.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### References
[[1](#id1)] [David Duvenaud (2014). “The Kernel Cookbook: Advice on Covariance functions”.](https://www.cs.toronto.edu/~duvenaud/cookbook/)
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import RationalQuadratic
>>> X, y = load_iris(return_X_y=True)
>>> kernel = RationalQuadratic(length_scale=1.0, alpha=1.5)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9733...
>>> gpc.predict_proba(X[:2,:])
array([[0.8881..., 0.0566..., 0.05518...],
[0.8678..., 0.0707... , 0.0614...]])
```
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.RationalQuadratic.__call__ "sklearn.gaussian_process.kernels.RationalQuadratic.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.RationalQuadratic.clone_with_theta "sklearn.gaussian_process.kernels.RationalQuadratic.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.RationalQuadratic.diag "sklearn.gaussian_process.kernels.RationalQuadratic.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.RationalQuadratic.get_params "sklearn.gaussian_process.kernels.RationalQuadratic.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.RationalQuadratic.is_stationary "sklearn.gaussian_process.kernels.RationalQuadratic.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.RationalQuadratic.set_params "sklearn.gaussian_process.kernels.RationalQuadratic.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1856)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims)
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval\_gradient is True.
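A hedged sketch (toy input assumed) of the shapes returned when the gradient is requested:
```
import numpy as np
from sklearn.gaussian_process.kernels import RationalQuadratic

X = np.random.RandomState(0).randn(4, 2)
kernel = RationalQuadratic()

K, K_gradient = kernel(X, eval_gradient=True)
print(K.shape)            # (4, 4)
print(K_gradient.shape)   # (4, 4, 2): one slice per non-fixed hyperparameter
```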
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L448)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X)
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L158)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameters
Returns a list of all hyperparameter specifications.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L474)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
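A hedged sketch illustrating the log transformation: exponentiating `theta` recovers the hyperparameter values on their original scale.
```
import numpy as np
from sklearn.gaussian_process.kernels import RationalQuadratic

kernel = RationalQuadratic(length_scale=2.0, alpha=0.5)
print(kernel.hyperparameters)   # the non-fixed hyperparameters, in theta order
print(np.exp(kernel.theta))     # the same values back on the original scale
```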
Examples using `sklearn.gaussian_process.kernels.RationalQuadratic`
-------------------------------------------------------------------
[Gaussian process regression (GPR) on Mauna Loa CO2 data](../../auto_examples/gaussian_process/plot_gpr_co2#sphx-glr-auto-examples-gaussian-process-plot-gpr-co2-py)
[Illustration of prior and posterior Gaussian process for different kernels](../../auto_examples/gaussian_process/plot_gpr_prior_posterior#sphx-glr-auto-examples-gaussian-process-plot-gpr-prior-posterior-py)
scikit_learn sklearn.ensemble.ExtraTreesRegressor sklearn.ensemble.ExtraTreesRegressor
====================================
*class*sklearn.ensemble.ExtraTreesRegressor(*n\_estimators=100*, *\**, *criterion='squared\_error'*, *max\_depth=None*, *min\_samples\_split=2*, *min\_samples\_leaf=1*, *min\_weight\_fraction\_leaf=0.0*, *max\_features=1.0*, *max\_leaf\_nodes=None*, *min\_impurity\_decrease=0.0*, *bootstrap=False*, *oob\_score=False*, *n\_jobs=None*, *random\_state=None*, *verbose=0*, *warm\_start=False*, *ccp\_alpha=0.0*, *max\_samples=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_forest.py#L2102)
An extra-trees regressor.
This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
Read more in the [User Guide](../ensemble#forest).
Parameters:
**n\_estimators**int, default=100
The number of trees in the forest.
Changed in version 0.22: The default value of `n_estimators` changed from 10 to 100 in 0.22.
**criterion**{“squared\_error”, “absolute\_error”}, default=”squared\_error”
The function to measure the quality of a split. Supported criteria are “squared\_error” for the mean squared error, which is equal to variance reduction as feature selection criterion, and “absolute\_error” for the mean absolute error.
New in version 0.18: Mean Absolute Error (MAE) criterion.
Deprecated since version 1.0: Criterion “mse” was deprecated in v1.0 and will be removed in version 1.2. Use `criterion="squared_error"` which is equivalent.
Deprecated since version 1.0: Criterion “mae” was deprecated in v1.0 and will be removed in version 1.2. Use `criterion="absolute_error"` which is equivalent.
**max\_depth**int, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min\_samples\_split samples.
**min\_samples\_split**int or float, default=2
The minimum number of samples required to split an internal node:
* If int, then consider `min_samples_split` as the minimum number.
* If float, then `min_samples_split` is a fraction and `ceil(min_samples_split * n_samples)` are the minimum number of samples for each split.
Changed in version 0.18: Added float values for fractions.
**min\_samples\_leaf**int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least `min_samples_leaf` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
* If int, then consider `min_samples_leaf` as the minimum number.
* If float, then `min_samples_leaf` is a fraction and `ceil(min_samples_leaf * n_samples)` are the minimum number of samples for each node.
Changed in version 0.18: Added float values for fractions.
**min\_weight\_fraction\_leaf**float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample\_weight is not provided.
**max\_features**{“sqrt”, “log2”, None}, int or float, default=1.0
The number of features to consider when looking for the best split:
* If int, then consider `max_features` features at each split.
* If float, then `max_features` is a fraction and `max(1, int(max_features * n_features_in_))` features are considered at each split.
* If “auto”, then `max_features=n_features`.
* If “sqrt”, then `max_features=sqrt(n_features)`.
* If “log2”, then `max_features=log2(n_features)`.
* If None or 1.0, then `max_features=n_features`.
Note
The default of 1.0 is equivalent to bagged trees and more randomness can be achieved by setting smaller values, e.g. 0.3.
Changed in version 1.1: The default of `max_features` changed from `"auto"` to 1.0.
Deprecated since version 1.1: The `"auto"` option was deprecated in 1.1 and will be removed in 1.3.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than `max_features` features.
**max\_leaf\_nodes**int, default=None
Grow trees with `max_leaf_nodes` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
**min\_impurity\_decrease**float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
```
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
```
where `N` is the total number of samples, `N_t` is the number of samples at the current node, `N_t_L` is the number of samples in the left child, and `N_t_R` is the number of samples in the right child.
`N`, `N_t`, `N_t_R` and `N_t_L` all refer to the weighted sum, if `sample_weight` is passed.
New in version 0.19.
**bootstrap**bool, default=False
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
**oob\_score**bool, default=False
Whether to use out-of-bag samples to estimate the generalization score. Only available if bootstrap=True.
**n\_jobs**int, default=None
The number of jobs to run in parallel. [`fit`](#sklearn.ensemble.ExtraTreesRegressor.fit "sklearn.ensemble.ExtraTreesRegressor.fit"), [`predict`](#sklearn.ensemble.ExtraTreesRegressor.predict "sklearn.ensemble.ExtraTreesRegressor.predict"), [`decision_path`](#sklearn.ensemble.ExtraTreesRegressor.decision_path "sklearn.ensemble.ExtraTreesRegressor.decision_path") and [`apply`](#sklearn.ensemble.ExtraTreesRegressor.apply "sklearn.ensemble.ExtraTreesRegressor.apply") are all parallelized over the trees. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**random\_state**int, RandomState instance or None, default=None
Controls 3 sources of randomness:
* the bootstrapping of the samples used when building trees (if `bootstrap=True`)
* the sampling of the features to consider when looking for the best split at each node (if `max_features < n_features`)
* the draw of the splits for each of the `max_features`
See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state) for details.
**verbose**int, default=0
Controls the verbosity when fitting and predicting.
**warm\_start**bool, default=False
When set to `True`, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**ccp\_alpha**non-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than `ccp_alpha` will be chosen. By default, no pruning is performed. See [Minimal Cost-Complexity Pruning](../tree#minimal-cost-complexity-pruning) for details.
New in version 0.22.
**max\_samples**int or float, default=None
If bootstrap is True, the number of samples to draw from X to train each base estimator.
* If None (default), then draw `X.shape[0]` samples.
* If int, then draw `max_samples` samples.
* If float, then draw `max_samples * X.shape[0]` samples. Thus, `max_samples` should be in the interval `(0.0, 1.0]`.
New in version 0.22.
Attributes:
**base\_estimator\_**ExtraTreeRegressor
The child estimator template used to create the collection of fitted sub-estimators.
**estimators\_**list of DecisionTreeRegressor
The collection of fitted sub-estimators.
[`feature_importances_`](#sklearn.ensemble.ExtraTreesRegressor.feature_importances_ "sklearn.ensemble.ExtraTreesRegressor.feature_importances_")ndarray of shape (n\_features,)
The impurity-based feature importances.
[`n_features_`](#sklearn.ensemble.ExtraTreesRegressor.n_features_ "sklearn.ensemble.ExtraTreesRegressor.n_features_")int
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_outputs\_**int
The number of outputs.
**oob\_score\_**float
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when `oob_score` is True.
**oob\_prediction\_**ndarray of shape (n\_samples,) or (n\_samples, n\_outputs)
Prediction computed with out-of-bag estimate on the training set. This attribute exists only when `oob_score` is True.
See also
[`ExtraTreesClassifier`](sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier")
An extra-trees classifier with random splits.
[`RandomForestClassifier`](sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")
A random forest classifier with optimal splits.
[`RandomForestRegressor`](sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor")
Ensemble regressor using trees with optimal splits.
#### Notes
The default values for the parameters controlling the size of the trees (e.g. `max_depth`, `min_samples_leaf`, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values.
#### References
[1] P. Geurts, D. Ernst., and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006.
#### Examples
```
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.ensemble import ExtraTreesRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> reg = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(
... X_train, y_train)
>>> reg.score(X_test, y_test)
0.2727...
```
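A hedged continuation (the diabetes dataset is re-created here so the snippet stands alone) showing how to inspect feature importances and make predictions:
```
from sklearn.datasets import load_diabetes
from sklearn.ensemble import ExtraTreesRegressor

X, y = load_diabetes(return_X_y=True)
reg = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances are normalized to sum to 1.
print(reg.feature_importances_.sum())
print(reg.predict(X[:2]))   # predictions for the first two samples
```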
#### Methods
| | |
| --- | --- |
| [`apply`](#sklearn.ensemble.ExtraTreesRegressor.apply "sklearn.ensemble.ExtraTreesRegressor.apply")(X) | Apply trees in the forest to X, return leaf indices. |
| [`decision_path`](#sklearn.ensemble.ExtraTreesRegressor.decision_path "sklearn.ensemble.ExtraTreesRegressor.decision_path")(X) | Return the decision path in the forest. |
| [`fit`](#sklearn.ensemble.ExtraTreesRegressor.fit "sklearn.ensemble.ExtraTreesRegressor.fit")(X, y[, sample\_weight]) | Build a forest of trees from the training set (X, y). |
| [`get_params`](#sklearn.ensemble.ExtraTreesRegressor.get_params "sklearn.ensemble.ExtraTreesRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.ensemble.ExtraTreesRegressor.predict "sklearn.ensemble.ExtraTreesRegressor.predict")(X) | Predict regression target for X. |
| [`score`](#sklearn.ensemble.ExtraTreesRegressor.score "sklearn.ensemble.ExtraTreesRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.ensemble.ExtraTreesRegressor.set_params "sklearn.ensemble.ExtraTreesRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
apply(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_forest.py#L235)
Apply trees in the forest to X, return leaf indices.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, its dtype will be converted to `dtype=np.float32`. If a sparse matrix is provided, it will be converted into a sparse `csr_matrix`.
Returns:
**X\_leaves**ndarray of shape (n\_samples, n\_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
decision\_path(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_forest.py#L261)
Return the decision path in the forest.
New in version 0.18.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, its dtype will be converted to `dtype=np.float32`. If a sparse matrix is provided, it will be converted into a sparse `csr_matrix`.
Returns:
**indicator**sparse matrix of shape (n\_samples, n\_nodes)
Return a node indicator matrix where non-zero elements indicate that the samples go through the nodes. The matrix is of CSR format.
**n\_nodes\_ptr**ndarray of shape (n\_estimators + 1,)
The columns from indicator[n\_nodes\_ptr[i]:n\_nodes\_ptr[i+1]] give the indicator values for the i-th estimator.
*property*feature\_importances\_
The impurity-based feature importances.
The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See [`sklearn.inspection.permutation_importance`](sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance") as an alternative.
Returns:
**feature\_importances\_**ndarray of shape (n\_features,)
The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_forest.py#L301)
Build a forest of trees from the training set (X, y).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The training input samples. Internally, its dtype will be converted to `dtype=np.float32`. If a sparse matrix is provided, it will be converted into a sparse `csc_matrix`.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
The target values (class labels in classification, real numbers in regression).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.
Returns:
**self**object
Fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_features\_
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2. Use `n_features_in_` instead.
Number of features when fitting the estimator.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_forest.py#L970)
Predict regression target for X.
The predicted regression target of an input sample is computed as the mean predicted regression targets of the trees in the forest.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, its dtype will be converted to `dtype=np.float32`. If a sparse matrix is provided, it will be converted into a sparse `csr_matrix`.
Returns:
**y**ndarray of shape (n\_samples,) or (n\_samples, n\_outputs)
The predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.ensemble.ExtraTreesRegressor`
-----------------------------------------------------
[Face completion with a multi-output estimators](../../auto_examples/miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py)
scikit_learn sklearn.metrics.balanced_accuracy_score sklearn.metrics.balanced\_accuracy\_score
=========================================
sklearn.metrics.balanced\_accuracy\_score(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*, *adjusted=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_classification.py#L1933)
Compute the balanced accuracy.
The balanced accuracy in binary and multiclass classification problems is used to deal with imbalanced datasets. It is defined as the average of recall obtained on each class.
The best value is 1 and the worst value is 0 when `adjusted=False`.
Read more in the [User Guide](../model_evaluation#balanced-accuracy-score).
New in version 0.20.
Parameters:
**y\_true**1d array-like
Ground truth (correct) target values.
**y\_pred**1d array-like
Estimated targets as returned by a classifier.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**adjusted**bool, default=False
When true, the result is adjusted for chance, so that random performance would score 0, while keeping perfect performance at a score of 1.
Returns:
**balanced\_accuracy**float
Balanced accuracy score.
See also
[`average_precision_score`](sklearn.metrics.average_precision_score#sklearn.metrics.average_precision_score "sklearn.metrics.average_precision_score")
Compute average precision (AP) from prediction scores.
[`precision_score`](sklearn.metrics.precision_score#sklearn.metrics.precision_score "sklearn.metrics.precision_score")
Compute the precision score.
[`recall_score`](sklearn.metrics.recall_score#sklearn.metrics.recall_score "sklearn.metrics.recall_score")
Compute the recall score.
[`roc_auc_score`](sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score")
Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
#### Notes
Some literature promotes alternative definitions of balanced accuracy. Our definition is equivalent to [`accuracy_score`](sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score "sklearn.metrics.accuracy_score") with class-balanced sample weights, and shares desirable properties with the binary case. See the [User Guide](../model_evaluation#balanced-accuracy-score).
#### References
[1] Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. (2010). The balanced accuracy and its posterior distribution. Proceedings of the 20th International Conference on Pattern Recognition, 3121-24.
[2] John. D. Kelleher, Brian Mac Namee, Aoife D’Arcy, (2015). [Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies](https://mitpress.mit.edu/books/fundamentals-machine-learning-predictive-data-analytics).
#### Examples
```
>>> from sklearn.metrics import balanced_accuracy_score
>>> y_true = [0, 1, 0, 0, 1, 0]
>>> y_pred = [0, 1, 0, 0, 0, 1]
>>> balanced_accuracy_score(y_true, y_pred)
0.625
```
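A hedged check of the definition above: balanced accuracy equals the unweighted mean of per-class recall (macro-averaged recall).
```
import numpy as np
from sklearn.metrics import balanced_accuracy_score, recall_score

y_true = [0, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 0, 1]

per_class_recall = recall_score(y_true, y_pred, average=None)
assert np.isclose(balanced_accuracy_score(y_true, y_pred),
                  per_class_recall.mean())   # both equal 0.625
```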
scikit_learn sklearn.linear_model.RidgeClassifierCV sklearn.linear\_model.RidgeClassifierCV
=======================================
*class*sklearn.linear\_model.RidgeClassifierCV(*alphas=(0.1, 1.0, 10.0)*, *\**, *fit\_intercept=True*, *normalize='deprecated'*, *scoring=None*, *cv=None*, *class\_weight=None*, *store\_cv\_values=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L2351)
Ridge classifier with built-in cross-validation.
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
By default, it performs Leave-One-Out Cross-Validation. Currently, only the n\_features > n\_samples case is handled efficiently.
Read more in the [User Guide](../linear_model#ridge-regression).
Parameters:
**alphas**ndarray of shape (n\_alphas,), default=(0.1, 1.0, 10.0)
Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to `1 / (2C)` in other linear models such as [`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") or [`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC").
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**scoring**str, callable, default=None
A string (see model evaluation documentation) or a scorer callable object / function with signature `scorer(estimator, X, y)`.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the efficient Leave-One-Out cross-validation
* integer, to specify the number of folds.
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
**class\_weight**dict or ‘balanced’, default=None
Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`.
**store\_cv\_values**bool, default=False
Flag indicating if the cross-validation values corresponding to each alpha should be stored in the `cv_values_` attribute (see below). This flag is only compatible with `cv=None` (i.e. using Leave-One-Out Cross-Validation).
Attributes:
**cv\_values\_**ndarray of shape (n\_samples, n\_targets, n\_alphas), optional
Cross-validation values for each alpha (only if `store_cv_values=True` and `cv=None`). After `fit()` has been called, this attribute will contain the mean squared errors if `scoring is None`, otherwise it will contain standardized per-point prediction values.
**coef\_**ndarray of shape (1, n\_features) or (n\_targets, n\_features)
Coefficient of the features in the decision function.
`coef_` is of shape (1, n\_features) when the given problem is binary.
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function. Set to 0.0 if `fit_intercept = False`.
**alpha\_**float
Estimated regularization parameter.
**best\_score\_**float
Score of base estimator with best alpha.
New in version 0.23.
[`classes_`](#sklearn.linear_model.RidgeClassifierCV.classes_ "sklearn.linear_model.RidgeClassifierCV.classes_")ndarray of shape (n\_classes,)
Classes labels.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`Ridge`](sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge")
Ridge regression.
[`RidgeClassifier`](sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier")
Ridge classifier.
[`RidgeCV`](sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV")
Ridge regression with built-in cross validation.
#### Notes
For multi-class classification, n\_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge.
#### Examples
```
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import RidgeClassifierCV
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y)
>>> clf.score(X, y)
0.9630...
```
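A hedged continuation (same dataset, re-created here so the snippet is self-contained) showing the selected regularization strength and its cross-validated score:
```
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y)

print(clf.alpha_)        # alpha chosen by leave-one-out cross-validation
print(clf.best_score_)   # score of the base estimator at that alpha
```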
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.linear_model.RidgeClassifierCV.decision_function "sklearn.linear_model.RidgeClassifierCV.decision_function")(X) | Predict confidence scores for samples. |
| [`fit`](#sklearn.linear_model.RidgeClassifierCV.fit "sklearn.linear_model.RidgeClassifierCV.fit")(X, y[, sample\_weight]) | Fit Ridge classifier with cv. |
| [`get_params`](#sklearn.linear_model.RidgeClassifierCV.get_params "sklearn.linear_model.RidgeClassifierCV.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.RidgeClassifierCV.predict "sklearn.linear_model.RidgeClassifierCV.predict")(X) | Predict class labels for samples in `X`. |
| [`score`](#sklearn.linear_model.RidgeClassifierCV.score "sklearn.linear_model.RidgeClassifierCV.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.linear_model.RidgeClassifierCV.set_params "sklearn.linear_model.RidgeClassifierCV.set_params")(\*\*params) | Set the parameters of this estimator. |
*property*classes\_
Classes labels.
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L408)
Predict confidence scores for samples.
The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the confidence scores.
Returns:
**scores**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Confidence scores per `(n_samples, n_classes)` combination. In the binary case, confidence score for `self.classes_[1]` where >0 means this class would be predicted.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L2502)
Fit Ridge classifier with cv.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features. When using GCV, will be cast to float64 if necessary.
**y**ndarray of shape (n\_samples,)
Target values. Will be cast to X’s dtype if necessary.
**sample\_weight**float or ndarray of shape (n\_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight.
Returns:
**self**object
Fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L1185)
Predict class labels for samples in `X`.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to predict the targets.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_outputs)
Vector or matrix containing the predictions. In binary and multiclass problems, this is a vector containing `n_samples`. In a multilabel problem, it returns a matrix of shape `(n_samples, n_outputs)`.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.preprocessing.MultiLabelBinarizer sklearn.preprocessing.MultiLabelBinarizer
=========================================
*class*sklearn.preprocessing.MultiLabelBinarizer(*\**, *classes=None*, *sparse\_output=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L674)
Transform between iterable of iterables and a multilabel format.
Although a list of sets or tuples is a very intuitive format for multilabel data, it is unwieldy to process. This transformer converts between this intuitive format and the supported multilabel format: a (samples x classes) binary matrix indicating the presence of a class label.
Parameters:
**classes**array-like of shape (n\_classes,), default=None
Indicates an ordering for the class labels. All entries should be unique (cannot contain duplicate classes).
**sparse\_output**bool, default=False
Set to True if output binary array is desired in CSR sparse format.
Attributes:
**classes\_**ndarray of shape (n\_classes,)
A copy of the `classes` parameter when provided. Otherwise it corresponds to the sorted set of classes found when fitting.
See also
[`OneHotEncoder`](sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder")
Encode categorical features using a one-hot aka one-of-K scheme.
#### Examples
```
>>> from sklearn.preprocessing import MultiLabelBinarizer
>>> mlb = MultiLabelBinarizer()
>>> mlb.fit_transform([(1, 2), (3,)])
array([[1, 1, 0],
[0, 0, 1]])
>>> mlb.classes_
array([1, 2, 3])
```
```
>>> mlb.fit_transform([{'sci-fi', 'thriller'}, {'comedy'}])
array([[0, 1, 1],
[1, 0, 0]])
>>> list(mlb.classes_)
['comedy', 'sci-fi', 'thriller']
```
A common mistake is to pass in a list, which leads to the following issue:
```
>>> mlb = MultiLabelBinarizer()
>>> mlb.fit(['sci-fi', 'thriller', 'comedy'])
MultiLabelBinarizer()
>>> mlb.classes_
array(['-', 'c', 'd', 'e', 'f', 'h', 'i', 'l', 'm', 'o', 'r', 's', 't',
'y'], dtype=object)
```
To correct this, the list of labels should be passed in as:
```
>>> mlb = MultiLabelBinarizer()
>>> mlb.fit([['sci-fi', 'thriller', 'comedy']])
MultiLabelBinarizer()
>>> mlb.classes_
array(['comedy', 'sci-fi', 'thriller'], dtype=object)
```
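A hedged round-trip sketch: `inverse_transform` maps an indicator matrix back to tuples of labels.
```
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform([{'sci-fi', 'thriller'}, {'comedy'}])
print(mlb.inverse_transform(Y))   # [('sci-fi', 'thriller'), ('comedy',)]
```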
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.MultiLabelBinarizer.fit "sklearn.preprocessing.MultiLabelBinarizer.fit")(y) | Fit the label sets binarizer, storing [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_). |
| [`fit_transform`](#sklearn.preprocessing.MultiLabelBinarizer.fit_transform "sklearn.preprocessing.MultiLabelBinarizer.fit_transform")(y) | Fit the label sets binarizer and transform the given label sets. |
| [`get_params`](#sklearn.preprocessing.MultiLabelBinarizer.get_params "sklearn.preprocessing.MultiLabelBinarizer.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.MultiLabelBinarizer.inverse_transform "sklearn.preprocessing.MultiLabelBinarizer.inverse_transform")(yt) | Transform the given indicator matrix into label sets. |
| [`set_params`](#sklearn.preprocessing.MultiLabelBinarizer.set_params "sklearn.preprocessing.MultiLabelBinarizer.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.MultiLabelBinarizer.transform "sklearn.preprocessing.MultiLabelBinarizer.transform")(y) | Transform the given label sets. |
fit(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L741)
Fit the label sets binarizer, storing [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
Parameters:
**y**iterable of iterables
A set of labels (any orderable and hashable object) for each sample. If the `classes` parameter is set, `y` will not be iterated.
Returns:
**self**object
Fitted estimator.
fit\_transform(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L772)
Fit the label sets binarizer and transform the given label sets.
Parameters:
**y**iterable of iterables
A set of labels (any orderable and hashable object) for each sample. If the `classes` parameter is set, `y` will not be iterated.
Returns:
**y\_indicator**{ndarray, sparse matrix} of shape (n\_samples, n\_classes)
A matrix such that `y_indicator[i, j] = 1` iff `classes_[j]` is in `y[i]`, and 0 otherwise. Sparse matrix will be of CSR format.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*yt*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L887)
Transform the given indicator matrix into label sets.
Parameters:
**yt**{ndarray, sparse matrix} of shape (n\_samples, n\_classes)
A matrix containing only 1s and 0s.
Returns:
**y**list of tuples
The set of labels for each sample such that `y[i]` consists of `classes_[j]` for each `yt[i, j] == 1`.
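A minimal round-trip sketch (reusing the movie-genre example above) showing how `inverse_transform` recovers the label sets:
```
>>> from sklearn.preprocessing import MultiLabelBinarizer
>>> mlb = MultiLabelBinarizer()
>>> yt = mlb.fit_transform([{'sci-fi', 'thriller'}, {'comedy'}])
>>> mlb.inverse_transform(yt)
[('sci-fi', 'thriller'), ('comedy',)]
```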
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L815)
Transform the given label sets.
Parameters:
**y**iterable of iterables
A set of labels (any orderable and hashable object) for each sample. If the `classes` parameter is set, `y` will not be iterated.
Returns:
**y\_indicator**array or CSR matrix, shape (n\_samples, n\_classes)
A matrix such that `y_indicator[i, j] = 1` iff `classes_[j]` is in `y[i]`, and 0 otherwise.
scikit_learn sklearn.utils.sparsefuncs.inplace_csr_column_scale sklearn.utils.sparsefuncs.inplace\_csr\_column\_scale
=====================================================
sklearn.utils.sparsefuncs.inplace\_csr\_column\_scale(*X*, *scale*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/sparsefuncs.py#L31)
Inplace column scaling of a CSR matrix.
Scale each feature of the data matrix by multiplying it with the per-feature scale provided by the caller, assuming a (n\_samples, n\_features) shape.
Parameters:
**X**sparse matrix of shape (n\_samples, n\_features)
Matrix to normalize using the variance of the features. It should be of CSR format.
**scale**ndarray of shape (n\_features,), dtype={np.float32, np.float64}
Array of precomputed feature-wise values to use for scaling.
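A small illustrative sketch (the data here is an assumption, not from the official docs): each column of a CSR matrix is scaled in place and the function returns nothing.
```
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.sparsefuncs import inplace_csr_column_scale
>>> X = csr_matrix(np.array([[1., 2.], [0., 4.]]))
>>> inplace_csr_column_scale(X, np.array([10., 0.5]))
>>> X.toarray()
array([[10.,  1.],
       [ 0.,  2.]])
```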
scikit_learn sklearn.calibration.calibration_curve sklearn.calibration.calibration\_curve
======================================
sklearn.calibration.calibration\_curve(*y\_true*, *y\_prob*, *\**, *pos\_label=None*, *normalize='deprecated'*, *n\_bins=5*, *strategy='uniform'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/calibration.py#L873)
Compute true and predicted probabilities for a calibration curve.
The method assumes the inputs come from a binary classifier, and discretizes the [0, 1] interval into bins.
Calibration curves may also be referred to as reliability diagrams.
Read more in the [User Guide](../calibration#calibration).
Parameters:
**y\_true**array-like of shape (n\_samples,)
True targets.
**y\_prob**array-like of shape (n\_samples,)
Probabilities of the positive class.
**pos\_label**int or str, default=None
The label of the positive class.
New in version 1.1.
**normalize**bool, default=”deprecated”
Whether y\_prob needs to be normalized into the [0, 1] interval, i.e. is not a proper probability. If True, the smallest value in y\_prob is linearly mapped onto 0 and the largest one onto 1.
Deprecated since version 1.1: The normalize argument is deprecated in v1.1 and will be removed in v1.3. Explicitly normalizing `y_prob` will reproduce this behavior, but it is recommended that a proper probability is used (i.e. a classifier’s `predict_proba` positive class).
**n\_bins**int, default=5
Number of bins to discretize the [0, 1] interval. A bigger number requires more data. Bins with no samples (i.e. without corresponding values in `y_prob`) will not be returned, thus the returned arrays may have fewer than `n_bins` values.
**strategy**{‘uniform’, ‘quantile’}, default=’uniform’
Strategy used to define the widths of the bins.
uniform
The bins have identical widths.
quantile
The bins have the same number of samples and depend on `y_prob`.
Returns:
**prob\_true**ndarray of shape (n\_bins,) or smaller
The proportion of samples whose class is the positive class, in each bin (fraction of positives).
**prob\_pred**ndarray of shape (n\_bins,) or smaller
The mean predicted probability in each bin.
#### References
Alexandru Niculescu-Mizil and Rich Caruana (2005) Predicting Good Probabilities With Supervised Learning, in Proceedings of the 22nd International Conference on Machine Learning (ICML). See section 4 (Qualitative Analysis of Predictions).
#### Examples
```
>>> import numpy as np
>>> from sklearn.calibration import calibration_curve
>>> y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
>>> y_pred = np.array([0.1, 0.2, 0.3, 0.4, 0.65, 0.7, 0.8, 0.9, 1.])
>>> prob_true, prob_pred = calibration_curve(y_true, y_pred, n_bins=3)
>>> prob_true
array([0. , 0.5, 1. ])
>>> prob_pred
array([0.2 , 0.525, 0.85 ])
```
scikit_learn sklearn.preprocessing.PowerTransformer sklearn.preprocessing.PowerTransformer
======================================
*class*sklearn.preprocessing.PowerTransformer(*method='yeo-johnson'*, *\**, *standardize=True*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2919)
Apply a power transform featurewise to make data more Gaussian-like.
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Currently, PowerTransformer supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood.
Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive or negative data.
By default, zero-mean, unit-variance normalization is applied to the transformed data.
Read more in the [User Guide](../preprocessing#preprocessing-transformer).
New in version 0.20.
Parameters:
**method**{‘yeo-johnson’, ‘box-cox’}, default=’yeo-johnson’
The power transform method. Available methods are:
* ‘yeo-johnson’ [[1]](#rf3e1504535de-1), works with positive and negative values
* ‘box-cox’ [[2]](#rf3e1504535de-2), only works with strictly positive values
**standardize**bool, default=True
Set to True to apply zero-mean, unit-variance normalization to the transformed output.
**copy**bool, default=True
Set to False to perform inplace computation during transformation.
Attributes:
**lambdas\_**ndarray of float of shape (n\_features,)
The parameters of the power transformation for the selected features.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`power_transform`](sklearn.preprocessing.power_transform#sklearn.preprocessing.power_transform "sklearn.preprocessing.power_transform")
Equivalent function without the estimator API.
[`QuantileTransformer`](sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer "sklearn.preprocessing.QuantileTransformer")
Maps data to a standard normal distribution with the parameter `output_distribution='normal'`.
#### Notes
NaNs are treated as missing values: disregarded in `fit`, and maintained in `transform`.
For a comparison of the different scalers, transformers, and normalizers, see [examples/preprocessing/plot\_all\_scaling.py](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
#### References
[[1](#id1)] I.K. Yeo and R.A. Johnson, “A new family of power transformations to improve normality or symmetry.” Biometrika, 87(4), pp.954-959, (2000).
[[2](#id2)] G.E.P. Box and D.R. Cox, “An Analysis of Transformations”, Journal of the Royal Statistical Society B, 26, 211-252 (1964).
#### Examples
```
>>> import numpy as np
>>> from sklearn.preprocessing import PowerTransformer
>>> pt = PowerTransformer()
>>> data = [[1, 2], [3, 2], [4, 5]]
>>> print(pt.fit(data))
PowerTransformer()
>>> print(pt.lambdas_)
[ 1.386... -3.100...]
>>> print(pt.transform(data))
[[-1.316... -0.707...]
[ 0.209... -0.707...]
[ 1.106... 1.414...]]
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.PowerTransformer.fit "sklearn.preprocessing.PowerTransformer.fit")(X[, y]) | Estimate the optimal parameter lambda for each feature. |
| [`fit_transform`](#sklearn.preprocessing.PowerTransformer.fit_transform "sklearn.preprocessing.PowerTransformer.fit_transform")(X[, y]) | Fit `PowerTransformer` to `X`, then transform `X`. |
| [`get_feature_names_out`](#sklearn.preprocessing.PowerTransformer.get_feature_names_out "sklearn.preprocessing.PowerTransformer.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.preprocessing.PowerTransformer.get_params "sklearn.preprocessing.PowerTransformer.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.PowerTransformer.inverse_transform "sklearn.preprocessing.PowerTransformer.inverse_transform")(X) | Apply the inverse power transformation using the fitted lambdas. |
| [`set_params`](#sklearn.preprocessing.PowerTransformer.set_params "sklearn.preprocessing.PowerTransformer.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.PowerTransformer.transform "sklearn.preprocessing.PowerTransformer.transform")(X) | Apply the power transform to each feature using the fitted lambdas. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L3019)
Estimate the optimal parameter lambda for each feature.
The optimal lambda parameter for minimizing skewness is estimated on each feature independently using maximum likelihood.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data used to estimate the optimal transformation parameters.
**y**None
Ignored.
Returns:
**self**object
Fitted transformer.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L3041)
Fit `PowerTransformer` to `X`, then transform `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data used to estimate the optimal transformation parameters and to be transformed using a power transformation.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features)
Transformed data.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L880)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Same as input features.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L3120)
Apply the inverse power transformation using the fitted lambdas.
The inverse of the Box-Cox transformation is given by:
```
if lambda_ == 0:
X = exp(X_trans)
else:
X = (X_trans * lambda_ + 1) ** (1 / lambda_)
```
The inverse of the Yeo-Johnson transformation is given by:
```
if X >= 0 and lambda_ == 0:
X = exp(X_trans) - 1
elif X >= 0 and lambda_ != 0:
X = (X_trans * lambda_ + 1) ** (1 / lambda_) - 1
elif X < 0 and lambda_ != 2:
X = 1 - (-(2 - lambda_) * X_trans + 1) ** (1 / (2 - lambda_))
elif X < 0 and lambda_ == 2:
X = 1 - exp(-X_trans)
```
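As a rough illustration (reusing the data from the class-level example above), applying `inverse_transform` to the output of `fit_transform` recovers the original data up to numerical precision:
```
>>> import numpy as np
>>> from sklearn.preprocessing import PowerTransformer
>>> pt = PowerTransformer()
>>> data = [[1, 2], [3, 2], [4, 5]]
>>> X_trans = pt.fit_transform(data)
>>> np.allclose(pt.inverse_transform(X_trans), data)
True
```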
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The transformed data.
Returns:
**X**ndarray of shape (n\_samples, n\_features)
The original data.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L3091)
Apply the power transform to each feature using the fitted lambdas.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data to be transformed using a power transformation.
Returns:
**X\_trans**ndarray of shape (n\_samples, n\_features)
The transformed data.
Examples using `sklearn.preprocessing.PowerTransformer`
-------------------------------------------------------
[Compare the effect of different scalers on data with outliers](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py)
[Map data to a normal distribution](../../auto_examples/preprocessing/plot_map_data_to_normal#sphx-glr-auto-examples-preprocessing-plot-map-data-to-normal-py)
scikit_learn sklearn.cluster.estimate_bandwidth sklearn.cluster.estimate\_bandwidth
===================================
sklearn.cluster.estimate\_bandwidth(*X*, *\**, *quantile=0.3*, *n\_samples=None*, *random\_state=0*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_mean_shift.py#L31)
Estimate the bandwidth to use with the mean-shift algorithm.
Note that this function takes time at least quadratic in n\_samples. For large datasets, it is wise to set the `n_samples` parameter to a small value.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input points.
**quantile**float, default=0.3
Should be in the interval [0, 1]; 0.5 means that the median of all pairwise distances is used.
**n\_samples**int, default=None
The number of samples to use. If not given, all samples are used.
**random\_state**int, RandomState instance or None, default=0
The generator used to randomly select the samples from input points for bandwidth estimation. Use an int to make the randomness deterministic. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Returns:
**bandwidth**float
The bandwidth parameter.
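A minimal usage sketch (the synthetic data and parameter values are assumptions for illustration): the estimated bandwidth is typically passed on to `MeanShift`.
```
>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import MeanShift, estimate_bandwidth
>>> X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)
>>> bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=200)
>>> bandwidth > 0
True
>>> ms = MeanShift(bandwidth=bandwidth).fit(X)
>>> ms.labels_.shape
(300,)
```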
Examples using `sklearn.cluster.estimate_bandwidth`
---------------------------------------------------
[A demo of the mean-shift clustering algorithm](../../auto_examples/cluster/plot_mean_shift#sphx-glr-auto-examples-cluster-plot-mean-shift-py)
[Comparing different clustering algorithms on toy datasets](../../auto_examples/cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
scikit_learn sklearn.svm.l1_min_c sklearn.svm.l1\_min\_c
======================
sklearn.svm.l1\_min\_c(*X*, *y*, *\**, *loss='squared\_hinge'*, *fit\_intercept=True*, *intercept\_scaling=1.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_bounds.py#L12)
Return the lowest bound for C such that for C in (l1\_min\_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as LinearSVC with penalty=’l1’ and linear\_model.LogisticRegression with penalty=’l1’.
This value is valid if class\_weight parameter in fit() is not set.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target vector relative to X.
**loss**{‘squared\_hinge’, ‘log’}, default=’squared\_hinge’
Specifies the loss function. With ‘squared\_hinge’ it is the squared hinge loss (a.k.a. L2 loss). With ‘log’ it is the loss of logistic regression models.
**fit\_intercept**bool, default=True
Specifies if the intercept should be fitted by the model. It must match the fit() method parameter.
**intercept\_scaling**float, default=1.0
When fit\_intercept is True, the instance vector x becomes [x, intercept\_scaling], i.e. a “synthetic” feature with constant value equal to intercept\_scaling is appended to the instance vector. It must match the fit() method parameter.
Returns:
**l1\_min\_c**float
Minimum value for C.
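A minimal sketch of the typical use, starting an L1 regularization path just above the smallest meaningful C (dataset and multiplier are illustrative assumptions):
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.svm import l1_min_c
>>> X, y = load_iris(return_X_y=True)
>>> X, y = X[y != 2], y[y != 2]  # keep a binary problem
>>> min_c = l1_min_c(X, y, loss='log')
>>> min_c > 0
True
>>> clf = LogisticRegression(penalty='l1', solver='liblinear', C=10 * min_c).fit(X, y)
```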
Examples using `sklearn.svm.l1_min_c`
-------------------------------------
[Regularization path of L1- Logistic Regression](../../auto_examples/linear_model/plot_logistic_path#sphx-glr-auto-examples-linear-model-plot-logistic-path-py)
scikit_learn sklearn.model_selection.permutation_test_score sklearn.model\_selection.permutation\_test\_score
=================================================
sklearn.model\_selection.permutation\_test\_score(*estimator*, *X*, *y*, *\**, *groups=None*, *cv=None*, *n\_permutations=100*, *n\_jobs=None*, *random\_state=0*, *verbose=0*, *scoring=None*, *fit\_params=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_validation.py#L1169)
Evaluate the significance of a cross-validated score with permutations.
Permutes targets to generate ‘randomized data’ and compute the empirical p-value against the null hypothesis that features and targets are independent.
The p-value represents the fraction of randomized data sets where the estimator performed as well or better than in the original data. A small p-value suggests that there is a real dependency between features and targets which has been used by the estimator to give good predictions. A large p-value may be due to a lack of real dependency between features and targets, or to the estimator not being able to use the dependency to give good predictions.
Read more in the [User Guide](../cross_validation#permutation-test-score).
Parameters:
**estimator**estimator object implementing ‘fit’
The object to use to fit the data.
**X**array-like of shape at least 2D
The data to fit.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs) or None
The target variable to try to predict in the case of supervised learning.
**groups**array-like of shape (n\_samples,), default=None
Labels to constrain permutation within groups, i.e. `y` values are permuted among samples with the same group identifier. When not specified, `y` values are permuted among all samples.
When a grouped cross-validator is used, the group labels are also passed on to the `split` method of the cross-validator. The cross-validator uses them for grouping the samples while splitting the dataset into train/test set.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* `None`, to use the default 5-fold cross validation,
* int, to specify the number of folds in a `(Stratified)KFold`,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For `int`/`None` inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if `None` changed from 3-fold to 5-fold.
**n\_permutations**int, default=100
Number of times to permute `y`.
**n\_jobs**int, default=None
Number of jobs to run in parallel. Training the estimator and computing the cross-validated score are parallelized over the permutations. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**random\_state**int, RandomState instance or None, default=0
Pass an int for reproducible output for permutation of `y` values among samples. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**verbose**int, default=0
The verbosity level.
**scoring**str or callable, default=None
A single str (see [The scoring parameter: defining model evaluation rules](../model_evaluation#scoring-parameter)) or a callable (see [Defining your scoring strategy from metric functions](../model_evaluation#scoring)) to evaluate the predictions on the test set.
If `None` the estimator’s score method is used.
**fit\_params**dict, default=None
Parameters to pass to the fit method of the estimator.
New in version 0.24.
Returns:
**score**float
The true score without permuting targets.
**permutation\_scores**array of shape (n\_permutations,)
The scores obtained for each permutation.
**pvalue**float
The p-value, which approximates the probability that the score would be obtained by chance. This is calculated as:
`(C + 1) / (n_permutations + 1)`
Where C is the number of permutations whose score >= the true score.
The best possible p-value is 1/(n\_permutations + 1), the worst is 1.0.
#### Notes
This function implements Test 1 in:
Ojala and Garriga. [Permutation Tests for Studying Classifier Performance](http://www.jmlr.org/papers/volume11/ojala10a/ojala10a.pdf). The Journal of Machine Learning Research (2010) vol. 11
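A hedged usage sketch (dataset, estimator and `n_permutations` are assumptions chosen for illustration):
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import permutation_test_score
>>> X, y = make_classification(n_samples=100, n_features=5, random_state=0)
>>> clf = LogisticRegression(random_state=0)
>>> score, perm_scores, pvalue = permutation_test_score(
...     clf, X, y, n_permutations=30, random_state=0)
>>> perm_scores.shape
(30,)
>>> 0 < pvalue <= 1.0
True
```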
Examples using `sklearn.model_selection.permutation_test_score`
---------------------------------------------------------------
[Test with permutations the significance of a classification score](../../auto_examples/model_selection/plot_permutation_tests_for_classification#sphx-glr-auto-examples-model-selection-plot-permutation-tests-for-classification-py)
scikit_learn sklearn.svm.OneClassSVM sklearn.svm.OneClassSVM
=======================
*class*sklearn.svm.OneClassSVM(*\**, *kernel='rbf'*, *degree=3*, *gamma='scale'*, *coef0=0.0*, *tol=0.001*, *nu=0.5*, *shrinking=True*, *cache\_size=200*, *verbose=False*, *max\_iter=-1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L1453)
Unsupervised Outlier Detection.
Estimate the support of a high-dimensional distribution.
The implementation is based on libsvm.
Read more in the [User Guide](../outlier_detection#outlier-detection).
Parameters:
**kernel**{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’} or callable, default=’rbf’
Specifies the kernel type to be used in the algorithm. If none is given, ‘rbf’ will be used. If a callable is given it is used to precompute the kernel matrix.
**degree**int, default=3
Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.
**gamma**{‘scale’, ‘auto’} or float, default=’scale’
Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.
* if `gamma='scale'` (default) is passed then it uses 1 / (n\_features \* X.var()) as value of gamma,
* if ‘auto’, uses 1 / n\_features.
Changed in version 0.22: The default value of `gamma` changed from ‘auto’ to ‘scale’.
**coef0**float, default=0.0
Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’.
**tol**float, default=1e-3
Tolerance for stopping criterion.
**nu**float, default=0.5
An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken.
**shrinking**bool, default=True
Whether to use the shrinking heuristic. See the [User Guide](../svm#shrinking-svm).
**cache\_size**float, default=200
Specify the size of the kernel cache (in MB).
**verbose**bool, default=False
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.
**max\_iter**int, default=-1
Hard limit on iterations within solver, or -1 for no limit.
Attributes:
**class\_weight\_**ndarray of shape (n\_classes,)
Multipliers of parameter C for each class. Computed based on the `class_weight` parameter.
[`coef_`](#sklearn.svm.OneClassSVM.coef_ "sklearn.svm.OneClassSVM.coef_")ndarray of shape (1, n\_features)
Weights assigned to the features when `kernel="linear"`.
**dual\_coef\_**ndarray of shape (1, n\_SV)
Coefficients of the support vectors in the decision function.
**fit\_status\_**int
0 if correctly fitted, 1 otherwise (will raise a warning).
**intercept\_**ndarray of shape (1,)
Constant in the decision function.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_iter\_**int
Number of iterations run by the optimization routine to fit the model.
New in version 1.1.
[`n_support_`](#sklearn.svm.OneClassSVM.n_support_ "sklearn.svm.OneClassSVM.n_support_")ndarray of shape (n\_classes,), dtype=int32
Number of support vectors for each class.
**offset\_**float
Offset used to define the decision function from the raw scores. We have the relation: decision\_function = score\_samples - `offset_`. The offset is the opposite of `intercept_` and is provided for consistency with other outlier detection algorithms.
New in version 0.20.
**shape\_fit\_**tuple of int of shape (n\_dimensions\_of\_X,)
Array dimensions of training vector `X`.
**support\_**ndarray of shape (n\_SV,)
Indices of support vectors.
**support\_vectors\_**ndarray of shape (n\_SV, n\_features)
Support vectors.
See also
[`sklearn.linear_model.SGDOneClassSVM`](sklearn.linear_model.sgdoneclasssvm#sklearn.linear_model.SGDOneClassSVM "sklearn.linear_model.SGDOneClassSVM")
Solves linear One-Class SVM using Stochastic Gradient Descent.
[`sklearn.neighbors.LocalOutlierFactor`](sklearn.neighbors.localoutlierfactor#sklearn.neighbors.LocalOutlierFactor "sklearn.neighbors.LocalOutlierFactor")
Unsupervised Outlier Detection using Local Outlier Factor (LOF).
[`sklearn.ensemble.IsolationForest`](sklearn.ensemble.isolationforest#sklearn.ensemble.IsolationForest "sklearn.ensemble.IsolationForest")
Isolation Forest Algorithm.
#### Examples
```
>>> from sklearn.svm import OneClassSVM
>>> X = [[0], [0.44], [0.45], [0.46], [1]]
>>> clf = OneClassSVM(gamma='auto').fit(X)
>>> clf.predict(X)
array([-1, 1, 1, 1, -1])
>>> clf.score_samples(X)
array([1.7798..., 2.0547..., 2.0556..., 2.0561..., 1.7332...])
```
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.svm.OneClassSVM.decision_function "sklearn.svm.OneClassSVM.decision_function")(X) | Signed distance to the separating hyperplane. |
| [`fit`](#sklearn.svm.OneClassSVM.fit "sklearn.svm.OneClassSVM.fit")(X[, y, sample\_weight]) | Detect the soft boundary of the set of samples X. |
| [`fit_predict`](#sklearn.svm.OneClassSVM.fit_predict "sklearn.svm.OneClassSVM.fit_predict")(X[, y]) | Perform fit on X and returns labels for X. |
| [`get_params`](#sklearn.svm.OneClassSVM.get_params "sklearn.svm.OneClassSVM.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.svm.OneClassSVM.predict "sklearn.svm.OneClassSVM.predict")(X) | Perform classification on samples in X. |
| [`score_samples`](#sklearn.svm.OneClassSVM.score_samples "sklearn.svm.OneClassSVM.score_samples")(X) | Raw scoring function of the samples. |
| [`set_params`](#sklearn.svm.OneClassSVM.set_params "sklearn.svm.OneClassSVM.set_params")(\*\*params) | Set the parameters of this estimator. |
*property*coef\_
Weights assigned to the features when `kernel="linear"`.
Returns:
ndarray of shape (n\_features, n\_classes)
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L1670)
Signed distance to the separating hyperplane.
Signed distance is positive for an inlier and negative for an outlier.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**dec**ndarray of shape (n\_samples,)
Returns the decision function of the samples.
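A small sketch (reusing the class-level example data) of the documented relation `decision_function = score_samples - offset_`:
```
>>> import numpy as np
>>> from sklearn.svm import OneClassSVM
>>> X = [[0], [0.44], [0.45], [0.46], [1]]
>>> clf = OneClassSVM(gamma='auto').fit(X)
>>> np.allclose(clf.decision_function(X), clf.score_samples(X) - clf.offset_)
True
```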
fit(*X*, *y=None*, *sample\_weight=None*, *\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L1624)
Detect the soft boundary of the set of samples X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Set of samples, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points.
**\*\*params**dict
Additional fit parameters.
Deprecated since version 1.0: The `fit` method will no longer accept extra keyword parameters in 1.2. These keyword parameters were already discarded.
Returns:
**self**object
Fitted estimator.
#### Notes
If X is not a C-ordered contiguous array it is copied.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L956)
Perform fit on X and returns labels for X.
Returns -1 for outliers and 1 for inliers.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**y**ndarray of shape (n\_samples,)
1 for inliers, -1 for outliers.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_support\_
Number of support vectors for each class.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L1703)
Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples\_test, n\_samples\_train)
For kernel=”precomputed”, the expected shape of X is (n\_samples\_test, n\_samples\_train).
Returns:
**y\_pred**ndarray of shape (n\_samples,)
Class labels for samples in X.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L1688)
Raw scoring function of the samples.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**score\_samples**ndarray of shape (n\_samples,)
Returns the (unshifted) scoring function of the samples.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.svm.OneClassSVM`
----------------------------------------
[Libsvm GUI](../../auto_examples/applications/svm_gui#sphx-glr-auto-examples-applications-svm-gui-py)
[Outlier detection on a real data set](../../auto_examples/applications/plot_outlier_detection_wine#sphx-glr-auto-examples-applications-plot-outlier-detection-wine-py)
[Species distribution modeling](../../auto_examples/applications/plot_species_distribution_modeling#sphx-glr-auto-examples-applications-plot-species-distribution-modeling-py)
[One-Class SVM versus One-Class SVM using Stochastic Gradient Descent](../../auto_examples/linear_model/plot_sgdocsvm_vs_ocsvm#sphx-glr-auto-examples-linear-model-plot-sgdocsvm-vs-ocsvm-py)
[Comparing anomaly detection algorithms for outlier detection on toy datasets](../../auto_examples/miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py)
[One-class SVM with non-linear kernel (RBF)](../../auto_examples/svm/plot_oneclass#sphx-glr-auto-examples-svm-plot-oneclass-py)
scikit_learn sklearn.covariance.ShrunkCovariance sklearn.covariance.ShrunkCovariance
===================================
*class*sklearn.covariance.ShrunkCovariance(*\**, *store\_precision=True*, *assume\_centered=False*, *shrinkage=0.1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_shrunk_covariance.py#L64)
Covariance estimator with shrinkage.
Read more in the [User Guide](../covariance#shrunk-covariance).
Parameters:
**store\_precision**bool, default=True
Specify if the estimated precision is stored.
**assume\_centered**bool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False, data will be centered before computation.
**shrinkage**float, default=0.1
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1].
Attributes:
**covariance\_**ndarray of shape (n\_features, n\_features)
Estimated covariance matrix.
**location\_**ndarray of shape (n\_features,)
Estimated location, i.e. the estimated mean.
**precision\_**ndarray of shape (n\_features, n\_features)
Estimated pseudo inverse matrix. (stored only if store\_precision is True)
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`EllipticEnvelope`](sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope "sklearn.covariance.EllipticEnvelope")
An object for detecting outliers in a Gaussian distributed dataset.
[`EmpiricalCovariance`](sklearn.covariance.empiricalcovariance#sklearn.covariance.EmpiricalCovariance "sklearn.covariance.EmpiricalCovariance")
Maximum likelihood covariance estimator.
[`GraphicalLasso`](sklearn.covariance.graphicallasso#sklearn.covariance.GraphicalLasso "sklearn.covariance.GraphicalLasso")
Sparse inverse covariance estimation with an l1-penalized estimator.
[`GraphicalLassoCV`](sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV "sklearn.covariance.GraphicalLassoCV")
Sparse inverse covariance with cross-validated choice of the l1 penalty.
[`LedoitWolf`](sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf "sklearn.covariance.LedoitWolf")
LedoitWolf Estimator.
[`MinCovDet`](sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet "sklearn.covariance.MinCovDet")
Minimum Covariance Determinant (robust estimator of covariance).
[`OAS`](sklearn.covariance.oas#sklearn.covariance.OAS "sklearn.covariance.OAS")
Oracle Approximating Shrinkage Estimator.
#### Notes
The regularized covariance is given by:
(1 - shrinkage) \* cov + shrinkage \* mu \* np.identity(n\_features)
where mu = trace(cov) / n\_features
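To make the formula concrete, a short sketch (the toy data is an assumption for illustration) that reproduces `covariance_` from the empirical covariance:
```
>>> import numpy as np
>>> from sklearn.covariance import ShrunkCovariance, empirical_covariance
>>> X = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 1.]])
>>> cov = empirical_covariance(X)
>>> shrinkage = 0.1
>>> mu = np.trace(cov) / cov.shape[0]
>>> shrunk = (1 - shrinkage) * cov + shrinkage * mu * np.identity(cov.shape[0])
>>> np.allclose(ShrunkCovariance(shrinkage=shrinkage).fit(X).covariance_, shrunk)
True
```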
#### Examples
```
>>> import numpy as np
>>> from sklearn.covariance import ShrunkCovariance
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> cov = ShrunkCovariance().fit(X)
>>> cov.covariance_
array([[0.7387..., 0.2536...],
[0.2536..., 0.4110...]])
>>> cov.location_
array([0.0622..., 0.0193...])
```
#### Methods
| | |
| --- | --- |
| [`error_norm`](#sklearn.covariance.ShrunkCovariance.error_norm "sklearn.covariance.ShrunkCovariance.error_norm")(comp\_cov[, norm, scaling, squared]) | Compute the Mean Squared Error between two covariance estimators. |
| [`fit`](#sklearn.covariance.ShrunkCovariance.fit "sklearn.covariance.ShrunkCovariance.fit")(X[, y]) | Fit the shrunk covariance model to X. |
| [`get_params`](#sklearn.covariance.ShrunkCovariance.get_params "sklearn.covariance.ShrunkCovariance.get_params")([deep]) | Get parameters for this estimator. |
| [`get_precision`](#sklearn.covariance.ShrunkCovariance.get_precision "sklearn.covariance.ShrunkCovariance.get_precision")() | Getter for the precision matrix. |
| [`mahalanobis`](#sklearn.covariance.ShrunkCovariance.mahalanobis "sklearn.covariance.ShrunkCovariance.mahalanobis")(X) | Compute the squared Mahalanobis distances of given observations. |
| [`score`](#sklearn.covariance.ShrunkCovariance.score "sklearn.covariance.ShrunkCovariance.score")(X\_test[, y]) | Compute the log-likelihood of `X_test` under the estimated Gaussian model. |
| [`set_params`](#sklearn.covariance.ShrunkCovariance.set_params "sklearn.covariance.ShrunkCovariance.set_params")(\*\*params) | Set the parameters of this estimator. |
error\_norm(*comp\_cov*, *norm='frobenius'*, *scaling=True*, *squared=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L267)
Compute the Mean Squared Error between two covariance estimators.
Parameters:
**comp\_cov**array-like of shape (n\_features, n\_features)
The covariance to compare with.
**norm**{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types:
* ‘frobenius’ (default): sqrt(tr(A^t.A))
* ‘spectral’: sqrt(max(eigenvalues(A^t.A)))
where A is the error `(comp_cov - self.covariance_)`.
**scaling**bool, default=True
If True (default), the squared error norm is divided by n\_features. If False, the squared error norm is not rescaled.
**squared**bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns:
**result**float
The Mean Squared Error (in the sense of the Frobenius norm) between `self` and `comp_cov` covariance estimators.
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_shrunk_covariance.py#L154)
Fit the shrunk covariance model to X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_precision()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L195)
Getter for the precision matrix.
Returns:
**precision\_**array-like of shape (n\_features, n\_features)
The precision matrix associated to the current covariance object.
mahalanobis(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L318)
Compute the squared Mahalanobis distances of given observations.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit.
Returns:
**dist**ndarray of shape (n\_samples,)
Squared Mahalanobis distances of the observations.
score(*X\_test*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L236)
Compute the log-likelihood of `X_test` under the estimated Gaussian model.
The Gaussian model is defined by its mean and covariance matrix which are represented respectively by `self.location_` and `self.covariance_`.
Parameters:
**X\_test**array-like of shape (n\_samples, n\_features)
Test data of which we compute the likelihood, where `n_samples` is the number of samples and `n_features` is the number of features. `X_test` is assumed to be drawn from the same distribution as the data used in fit (including centering).
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**res**float
The log-likelihood of `X_test` with `self.location_` and `self.covariance_` as estimators of the Gaussian model mean and covariance matrix respectively.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.covariance.ShrunkCovariance`
----------------------------------------------------
[Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood](../../auto_examples/covariance/plot_covariance_estimation#sphx-glr-auto-examples-covariance-plot-covariance-estimation-py)
[Model selection with Probabilistic PCA and Factor Analysis (FA)](../../auto_examples/decomposition/plot_pca_vs_fa_model_selection#sphx-glr-auto-examples-decomposition-plot-pca-vs-fa-model-selection-py)
scikit_learn sklearn.neighbors.KNeighborsRegressor sklearn.neighbors.KNeighborsRegressor
=====================================
*class*sklearn.neighbors.KNeighborsRegressor(*n\_neighbors=5*, *\**, *weights='uniform'*, *algorithm='auto'*, *leaf\_size=30*, *p=2*, *metric='minkowski'*, *metric\_params=None*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_regression.py#L22)
Regression based on k-nearest neighbors.
The target is predicted by local interpolation of the targets associated of the nearest neighbors in the training set.
Read more in the [User Guide](../neighbors#regression).
New in version 0.9.
Parameters:
**n\_neighbors**int, default=5
Number of neighbors to use by default for [`kneighbors`](#sklearn.neighbors.KNeighborsRegressor.kneighbors "sklearn.neighbors.KNeighborsRegressor.kneighbors") queries.
**weights**{‘uniform’, ‘distance’} or callable, default=’uniform’
Weight function used in prediction. Possible values:
* ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.
* ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.
* [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
Uniform weights are used by default.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree")
* ‘kd\_tree’ will use [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree")
* ‘brute’ will use a brute-force search.
* ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.neighbors.KNeighborsRegressor.fit "sklearn.neighbors.KNeighborsRegressor.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**p**int, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan\_distance (l1), and euclidean\_distance (l2) for p = 2. For arbitrary p, minkowski\_distance (l\_p) is used.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Default is “minkowski”, which results in the standard Euclidean distance when p = 2. See the documentation of [scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in [`distance_metrics`](sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics "sklearn.metrics.pairwise.distance_metrics") for valid metric values.
If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a [sparse graph](https://scikit-learn.org/1.1/glossary.html#term-sparse-graph), in which case only “nonzero” elements may be considered neighbors.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details. Doesn’t affect [`fit`](#sklearn.neighbors.KNeighborsRegressor.fit "sklearn.neighbors.KNeighborsRegressor.fit") method.
Attributes:
**effective\_metric\_**str or callable
The distance metric to use. It will be same as the `metric` parameter or a synonym of it, e.g. ‘euclidean’ if the `metric` parameter set to ‘minkowski’ and `p` parameter set to 2.
**effective\_metric\_params\_**dict
Additional keyword arguments for the metric function. For most metrics will be same with `metric_params` parameter, but may also contain the `p` parameter value if the `effective_metric_` attribute is set to ‘minkowski’.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_fit\_**int
Number of samples in the fitted data.
See also
[`NearestNeighbors`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors "sklearn.neighbors.NearestNeighbors")
Unsupervised learner for implementing neighbor searches.
[`RadiusNeighborsRegressor`](sklearn.neighbors.radiusneighborsregressor#sklearn.neighbors.RadiusNeighborsRegressor "sklearn.neighbors.RadiusNeighborsRegressor")
Regression based on neighbors within a fixed radius.
[`KNeighborsClassifier`](sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
Classifier implementing the k-nearest neighbors vote.
[`RadiusNeighborsClassifier`](sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
Classifier implementing a vote among neighbors within a given radius.
#### Notes
See [Nearest Neighbors](../neighbors#neighbors) in the online documentation for a discussion of the choice of `algorithm` and `leaf_size`.
Warning
Regarding the Nearest Neighbors algorithms, if it is found that two neighbors, neighbor `k+1` and `k`, have identical distances but different labels, the results will depend on the ordering of the training data.
<https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm>
#### Examples
```
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import KNeighborsRegressor
>>> neigh = KNeighborsRegressor(n_neighbors=2)
>>> neigh.fit(X, y)
KNeighborsRegressor(...)
>>> print(neigh.predict([[1.5]]))
[0.5]
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.neighbors.KNeighborsRegressor.fit "sklearn.neighbors.KNeighborsRegressor.fit")(X, y) | Fit the k-nearest neighbors regressor from the training dataset. |
| [`get_params`](#sklearn.neighbors.KNeighborsRegressor.get_params "sklearn.neighbors.KNeighborsRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`kneighbors`](#sklearn.neighbors.KNeighborsRegressor.kneighbors "sklearn.neighbors.KNeighborsRegressor.kneighbors")([X, n\_neighbors, return\_distance]) | Find the K-neighbors of a point. |
| [`kneighbors_graph`](#sklearn.neighbors.KNeighborsRegressor.kneighbors_graph "sklearn.neighbors.KNeighborsRegressor.kneighbors_graph")([X, n\_neighbors, mode]) | Compute the (weighted) graph of k-Neighbors for points in X. |
| [`predict`](#sklearn.neighbors.KNeighborsRegressor.predict "sklearn.neighbors.KNeighborsRegressor.predict")(X) | Predict the target for the provided data. |
| [`score`](#sklearn.neighbors.KNeighborsRegressor.score "sklearn.neighbors.KNeighborsRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.neighbors.KNeighborsRegressor.set_params "sklearn.neighbors.KNeighborsRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_regression.py#L190)
Fit the k-nearest neighbors regressor from the training dataset.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples) if metric=’precomputed’
Training data.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_outputs)
Target values.
Returns:
**self**KNeighborsRegressor
The fitted k-nearest neighbors regressor.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
kneighbors(*X=None*, *n\_neighbors=None*, *return\_distance=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L670)
Find the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.
Parameters:
**X**array-like, shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**n\_neighbors**int, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
**return\_distance**bool, default=True
Whether or not to return the distances.
Returns:
**neigh\_dist**ndarray of shape (n\_queries, n\_neighbors)
Array representing the lengths to points, only present if return\_distance=True.
**neigh\_ind**ndarray of shape (n\_queries, n\_neighbors)
Indices of the nearest points in the population matrix.
#### Examples
In the following example, we construct a NearestNeighbors class from an array representing our data set and ask which point is the closest to [1, 1, 1]:
```
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
```
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points:
```
>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
[2]]...)
```
kneighbors\_graph(*X=None*, *n\_neighbors=None*, *mode='connectivity'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L860)
Compute the (weighted) graph of k-Neighbors for points in X.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For `metric='precomputed'` the shape should be (n\_queries, n\_indexed). Otherwise the shape should be (n\_queries, n\_features).
**n\_neighbors**int, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
**mode**{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are distances between points, type of distance depends on the selected metric parameter in NearestNeighbors class.
Returns:
**A**sparse-matrix of shape (n\_queries, n\_samples\_fit)
`n_samples_fit` is the number of samples in the fitted data. `A[i, j]` gives the weight of the edge connecting `i` to `j`. The matrix is of CSR format.
See also
[`NearestNeighbors.radius_neighbors_graph`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.radius_neighbors_graph "sklearn.neighbors.NearestNeighbors.radius_neighbors_graph")
Compute the (weighted) graph of Neighbors for points in X.
#### Examples
```
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
```
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_regression.py#L212)
Predict the target for the provided data.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’
Test samples.
Returns:
**y**ndarray of shape (n\_queries,) or (n\_queries, n\_outputs), dtype=double
Target values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` with respect to `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
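As a small worked illustration of the definition above (a sketch added here, not part of the original reference), take `y_true = [1, 2, 3]` and predictions `[1, 2, 2]`; the mean of `y_true` is 2.0:
```
>>> from sklearn.metrics import r2_score
>>> y_true = [1.0, 2.0, 3.0]
>>> y_pred = [1.0, 2.0, 2.0]
>>> u = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
>>> v = sum((t - 2.0) ** 2 for t in y_true)                # total sum of squares
>>> 1 - u / v
0.5
>>> float(r2_score(y_true, y_pred))
0.5
```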
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.neighbors.KNeighborsRegressor`
------------------------------------------------------
[Face completion with multi-output estimators](../../auto_examples/miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py)
[Imputing missing values with variants of IterativeImputer](../../auto_examples/impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)
[Nearest Neighbors regression](../../auto_examples/neighbors/plot_regression#sphx-glr-auto-examples-neighbors-plot-regression-py)
scikit_learn sklearn.pipeline.FeatureUnion sklearn.pipeline.FeatureUnion
=============================
*class*sklearn.pipeline.FeatureUnion(*transformer\_list*, *\**, *n\_jobs=None*, *transformer\_weights=None*, *verbose=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L887)
Concatenates results of multiple transformer objects.
This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer.
Parameters of the transformers may be set using their name and the parameter name separated by a ‘\_\_’. A transformer may be replaced entirely by setting the parameter with its name to another transformer, removed by setting it to ‘drop’, or disabled by setting it to ‘passthrough’ (features are passed without transformation).
Read more in the [User Guide](../compose#feature-union).
New in version 0.13.
Parameters:
**transformer\_list**list of (str, transformer) tuples
List of transformer objects to be applied to the data. The first element of each tuple is the name of the transformer. The transformer can be ‘drop’ for it to be ignored or can be ‘passthrough’ for features to be passed unchanged.
New in version 1.1: Added the option `"passthrough"`.
Changed in version 0.22: Deprecated `None` as a transformer in favor of ‘drop’.
**n\_jobs**int, default=None
Number of jobs to run in parallel. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Changed in version 0.20: `n_jobs` default changed from 1 to None.
**transformer\_weights**dict, default=None
Multiplicative weights for features per transformer. Keys are transformer names, values the weights. Raises ValueError if key not present in `transformer_list`.
**verbose**bool, default=False
If True, the time elapsed while fitting each transformer will be printed as it is completed.
Attributes:
[`n_features_in_`](#sklearn.pipeline.FeatureUnion.n_features_in_ "sklearn.pipeline.FeatureUnion.n_features_in_")int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
See also
[`make_union`](sklearn.pipeline.make_union#sklearn.pipeline.make_union "sklearn.pipeline.make_union")
Convenience function for simplified feature union construction.
#### Examples
```
>>> from sklearn.pipeline import FeatureUnion
>>> from sklearn.decomposition import PCA, TruncatedSVD
>>> union = FeatureUnion([("pca", PCA(n_components=1)),
... ("svd", TruncatedSVD(n_components=2))])
>>> X = [[0., 1., 3], [2., 2., 5]]
>>> union.fit_transform(X)
array([[ 1.5 , 3.0..., 0.8...],
[-1.5 , 5.7..., -0.4...]])
```
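A small follow-up sketch (continuing the example above, not part of the original reference) of the nested-parameter and ‘drop’ mechanics described earlier; the reprs are abbreviated with ellipses:
```
>>> union.set_params(pca__n_components=2)  # tune a nested parameter via '<name>__<parameter>'
FeatureUnion(...)
>>> union.set_params(svd='drop')           # disable the SVD branch entirely
FeatureUnion(...)
>>> union.fit_transform(X).shape           # only the two PCA components remain
(2, 2)
```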
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.pipeline.FeatureUnion.fit "sklearn.pipeline.FeatureUnion.fit")(X[, y]) | Fit all transformers using X. |
| [`fit_transform`](#sklearn.pipeline.FeatureUnion.fit_transform "sklearn.pipeline.FeatureUnion.fit_transform")(X[, y]) | Fit all transformers, transform the data and concatenate results. |
| [`get_feature_names`](#sklearn.pipeline.FeatureUnion.get_feature_names "sklearn.pipeline.FeatureUnion.get_feature_names")() | DEPRECATED: get\_feature\_names is deprecated in 1.0 and will be removed in 1.2. |
| [`get_feature_names_out`](#sklearn.pipeline.FeatureUnion.get_feature_names_out "sklearn.pipeline.FeatureUnion.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.pipeline.FeatureUnion.get_params "sklearn.pipeline.FeatureUnion.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.pipeline.FeatureUnion.set_params "sklearn.pipeline.FeatureUnion.set_params")(\*\*kwargs) | Set the parameters of this estimator. |
| [`transform`](#sklearn.pipeline.FeatureUnion.transform "sklearn.pipeline.FeatureUnion.transform")(X) | Transform X separately by each transformer, concatenate results. |
fit(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L1106)
Fit all transformers using X.
Parameters:
**X**iterable or array-like, depending on transformers
Input data, used to fit transformers.
**y**array-like of shape (n\_samples, n\_outputs), default=None
Targets for supervised learning.
**\*\*fit\_params**dict, default=None
Parameters to pass to the fit method of the estimator.
Returns:
**self**object
FeatureUnion class instance.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L1133)
Fit all transformers, transform the data and concatenate results.
Parameters:
**X**iterable or array-like, depending on transformers
Input data to be transformed.
**y**array-like of shape (n\_samples, n\_outputs), default=None
Targets for supervised learning.
**\*\*fit\_params**dict, default=None
Parameters to pass to the fit method of the estimator.
Returns:
**X\_t**array-like or sparse matrix of shape (n\_samples, sum\_n\_components)
The `hstack` of results of transformers. `sum_n_components` is the sum of `n_components` (output dimension) over transformers.
get\_feature\_names()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L1059)
DEPRECATED: get\_feature\_names is deprecated in 1.0 and will be removed in 1.2. Please use get\_feature\_names\_out instead.
Get feature names from all transformers.
Returns:
**feature\_names**list of strings
Names of the features produced by transform.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L1081)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L972)
Get parameters for this estimator.
Returns the parameters given in the constructor as well as the estimators contained within the `transformer_list` of the `FeatureUnion`.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**mapping of string to any
Parameter names mapped to their values.
*property*n\_features\_in\_
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
set\_params(*\*\*kwargs*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L992)
Set the parameters of this estimator.
Valid parameter keys can be listed with `get_params()`. Note that you can directly set the parameters of the estimators contained in `transformer_list`.
Parameters:
**\*\*kwargs**dict
Parameters of this estimator or parameters of estimators contained in `transformer_list`. Parameters of the transformers may be set using their name and the parameter name separated by a ‘\_\_’.
Returns:
**self**object
FeatureUnion class instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L1189)
Transform X separately by each transformer, concatenate results.
Parameters:
**X**iterable or array-like, depending on transformers
Input data to be transformed.
Returns:
**X\_t**array-like or sparse matrix of shape (n\_samples, sum\_n\_components)
The `hstack` of results of transformers. `sum_n_components` is the sum of `n_components` (output dimension) over transformers.
Examples using `sklearn.pipeline.FeatureUnion`
----------------------------------------------
[Time-related feature engineering](../../auto_examples/applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py)
[Concatenating multiple feature extraction methods](../../auto_examples/compose/plot_feature_union#sphx-glr-auto-examples-compose-plot-feature-union-py)
| programming_docs |
scikit_learn sklearn.covariance.empirical_covariance sklearn.covariance.empirical\_covariance
========================================
sklearn.covariance.empirical\_covariance(*X*, *\**, *assume\_centered=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L51)
Compute the Maximum likelihood covariance estimator.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Data from which to compute the covariance estimate.
**assume\_centered**bool, default=False
If `True`, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If `False`, data will be centered before computation.
Returns:
**covariance**ndarray of shape (n\_features, n\_features)
Empirical covariance (Maximum Likelihood Estimator).
#### Examples
```
>>> from sklearn.covariance import empirical_covariance
>>> X = [[1,1,1],[1,1,1],[1,1,1],
... [0,0,0],[0,0,0],[0,0,0]]
>>> empirical_covariance(X)
array([[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25]])
```
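As a quick cross-check (a sketch added here, not part of the original example), the maximum-likelihood estimate coincides with the biased sample covariance computed by NumPy:
```
>>> import numpy as np
>>> np.allclose(empirical_covariance(X), np.cov(np.asarray(X).T, bias=True))
True
```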
Examples using `sklearn.covariance.empirical_covariance`
--------------------------------------------------------
[Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood](../../auto_examples/covariance/plot_covariance_estimation#sphx-glr-auto-examples-covariance-plot-covariance-estimation-py)
scikit_learn sklearn.metrics.silhouette_score sklearn.metrics.silhouette\_score
=================================
sklearn.metrics.silhouette\_score(*X*, *labels*, *\**, *metric='euclidean'*, *sample\_size=None*, *random\_state=None*, *\*\*kwds*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_unsupervised.py#L39)
Compute the mean Silhouette Coefficient of all samples.
The Silhouette Coefficient is calculated using the mean intra-cluster distance (`a`) and the mean nearest-cluster distance (`b`) for each sample. The Silhouette Coefficient for a sample is `(b - a) / max(a, b)`. To clarify, `b` is the distance between a sample and the nearest cluster that the sample is not a part of. Note that the Silhouette Coefficient is only defined if the number of labels satisfies `2 <= n_labels <= n_samples - 1`.
This function returns the mean Silhouette Coefficient over all samples. To obtain the values for each sample, use [`silhouette_samples`](sklearn.metrics.silhouette_samples#sklearn.metrics.silhouette_samples "sklearn.metrics.silhouette_samples").
The best value is 1 and the worst value is -1. Values near 0 indicate overlapping clusters. Negative values generally indicate that a sample has been assigned to the wrong cluster, as a different cluster is more similar.
Read more in the [User Guide](../clustering#silhouette-coefficient).
Parameters:
**X**array-like of shape (n\_samples\_a, n\_samples\_a) if metric == “precomputed” or (n\_samples\_a, n\_features) otherwise
An array of pairwise distances between samples, or a feature array.
**labels**array-like of shape (n\_samples,)
Predicted labels for each sample.
**metric**str or callable, default=’euclidean’
The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by `metrics.pairwise.pairwise_distances`. If `X` is the distance array itself, use `metric="precomputed"`.
**sample\_size**int, default=None
The size of the sample to use when computing the Silhouette Coefficient on a random subset of the data. If `sample_size is None`, no sampling is used.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for selecting a subset of samples. Used when `sample_size is not None`. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**\*\*kwds**optional keyword parameters
Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples.
Returns:
**silhouette**float
Mean Silhouette Coefficient for all samples.
#### References
[1] [Peter J. Rousseeuw (1987). “Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”. Computational and Applied Mathematics 20: 53-65.](https://www.sciencedirect.com/science/article/pii/0377042787901257)
[2] [Wikipedia entry on the Silhouette Coefficient](https://en.wikipedia.org/wiki/Silhouette_(clustering))
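A minimal illustrative sketch (added here, not part of the original reference): two well-separated clusters with hand-assigned labels give a coefficient close to 1, while mixing up the labels drives it negative:
```
>>> import numpy as np
>>> from sklearn.metrics import silhouette_score
>>> X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
>>> silhouette_score(X, [0, 0, 1, 1])
0.92...
>>> silhouette_score(X, [0, 1, 1, 0])
-0.46...
```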
Examples using `sklearn.metrics.silhouette_score`
-------------------------------------------------
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Demo of DBSCAN clustering algorithm](../../auto_examples/cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py)
[Demo of affinity propagation clustering algorithm](../../auto_examples/cluster/plot_affinity_propagation#sphx-glr-auto-examples-cluster-plot-affinity-propagation-py)
[Selecting the number of clusters with silhouette analysis on KMeans clustering](../../auto_examples/cluster/plot_kmeans_silhouette_analysis#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py)
[Clustering text documents using k-means](../../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
scikit_learn sklearn.covariance.OAS sklearn.covariance.OAS
======================
*class*sklearn.covariance.OAS(*\**, *store\_precision=True*, *assume\_centered=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_shrunk_covariance.py#L553)
Oracle Approximating Shrinkage Estimator.
Read more in the [User Guide](../covariance#shrunk-covariance).
OAS is a particular form of shrinkage described in “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010.
The formula used here does not correspond to the one given in the article. In the original article, formula (23) states that 2/p is multiplied by Trace(cov\*cov) in both the numerator and denominator, but this operation is omitted because for a large p, the value of 2/p is so small that it doesn’t affect the value of the estimator.
Parameters:
**store\_precision**bool, default=True
Specify if the estimated precision is stored.
**assume\_centered**bool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data will be centered before computation.
Attributes:
**covariance\_**ndarray of shape (n\_features, n\_features)
Estimated covariance matrix.
**location\_**ndarray of shape (n\_features,)
Estimated location, i.e. the estimated mean.
**precision\_**ndarray of shape (n\_features, n\_features)
Estimated pseudo inverse matrix. (stored only if store\_precision is True)
**shrinkage\_**float
Coefficient in the convex combination used to compute the shrunk estimate. Range is [0, 1].
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`EllipticEnvelope`](sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope "sklearn.covariance.EllipticEnvelope")
An object for detecting outliers in a Gaussian distributed dataset.
[`EmpiricalCovariance`](sklearn.covariance.empiricalcovariance#sklearn.covariance.EmpiricalCovariance "sklearn.covariance.EmpiricalCovariance")
Maximum likelihood covariance estimator.
[`GraphicalLasso`](sklearn.covariance.graphicallasso#sklearn.covariance.GraphicalLasso "sklearn.covariance.GraphicalLasso")
Sparse inverse covariance estimation with an l1-penalized estimator.
[`GraphicalLassoCV`](sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV "sklearn.covariance.GraphicalLassoCV")
Sparse inverse covariance with cross-validated choice of the l1 penalty.
[`LedoitWolf`](sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf "sklearn.covariance.LedoitWolf")
LedoitWolf Estimator.
[`MinCovDet`](sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet "sklearn.covariance.MinCovDet")
Minimum Covariance Determinant (robust estimator of covariance).
[`ShrunkCovariance`](sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance "sklearn.covariance.ShrunkCovariance")
Covariance estimator with shrinkage.
#### Notes
The regularised covariance is:
(1 - shrinkage) \* cov + shrinkage \* mu \* np.identity(n\_features)
where mu = trace(cov) / n\_features and the shrinkage coefficient is given by the OAS formula (see References).
#### References
“Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010.
#### Examples
```
>>> import numpy as np
>>> from sklearn.covariance import OAS
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> oas = OAS().fit(X)
>>> oas.covariance_
array([[0.7533..., 0.2763...],
[0.2763..., 0.3964...]])
>>> oas.precision_
array([[ 1.7833..., -1.2431... ],
[-1.2431..., 3.3889...]])
>>> oas.shrinkage_
0.0195...
```
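Continuing the example, a small cross-check sketch (added here, not part of the original) that the fitted covariance matches the regularised-covariance formula from the Notes above, with the empirical covariance as the starting point:
```
>>> from sklearn.covariance import empirical_covariance
>>> emp_cov = empirical_covariance(X)
>>> mu = np.trace(emp_cov) / 2  # n_features = 2
>>> shrunk = (1 - oas.shrinkage_) * emp_cov + oas.shrinkage_ * mu * np.identity(2)
>>> np.allclose(shrunk, oas.covariance_)
True
```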
#### Methods
| | |
| --- | --- |
| [`error_norm`](#sklearn.covariance.OAS.error_norm "sklearn.covariance.OAS.error_norm")(comp\_cov[, norm, scaling, squared]) | Compute the Mean Squared Error between two covariance estimators. |
| [`fit`](#sklearn.covariance.OAS.fit "sklearn.covariance.OAS.fit")(X[, y]) | Fit the Oracle Approximating Shrinkage covariance model to X. |
| [`get_params`](#sklearn.covariance.OAS.get_params "sklearn.covariance.OAS.get_params")([deep]) | Get parameters for this estimator. |
| [`get_precision`](#sklearn.covariance.OAS.get_precision "sklearn.covariance.OAS.get_precision")() | Getter for the precision matrix. |
| [`mahalanobis`](#sklearn.covariance.OAS.mahalanobis "sklearn.covariance.OAS.mahalanobis")(X) | Compute the squared Mahalanobis distances of given observations. |
| [`score`](#sklearn.covariance.OAS.score "sklearn.covariance.OAS.score")(X\_test[, y]) | Compute the log-likelihood of `X_test` under the estimated Gaussian model. |
| [`set_params`](#sklearn.covariance.OAS.set_params "sklearn.covariance.OAS.set_params")(\*\*params) | Set the parameters of this estimator. |
error\_norm(*comp\_cov*, *norm='frobenius'*, *scaling=True*, *squared=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L267)
Compute the Mean Squared Error between two covariance estimators.
Parameters:
**comp\_cov**array-like of shape (n\_features, n\_features)
The covariance to compare with.
**norm**{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types:
* ‘frobenius’ (default): sqrt(tr(A^t.A))
* ‘spectral’: sqrt(max(eigenvalues(A^t.A)))
where A is the error `(comp_cov - self.covariance_)`.
**scaling**bool, default=True
If True (default), the squared error norm is divided by n\_features. If False, the squared error norm is not rescaled.
**squared**bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns:
**result**float
The Mean Squared Error (in the sense of the Frobenius norm) between `self` and `comp_cov` covariance estimators.
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_shrunk_covariance.py#L656)
Fit the Oracle Approximating Shrinkage covariance model to X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_precision()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L195)
Getter for the precision matrix.
Returns:
**precision\_**array-like of shape (n\_features, n\_features)
The precision matrix associated to the current covariance object.
mahalanobis(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L318)
Compute the squared Mahalanobis distances of given observations.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The observations, for which the Mahalanobis distances are computed. Observations are assumed to be drawn from the same distribution as the data used in fit.
Returns:
**dist**ndarray of shape (n\_samples,)
Squared Mahalanobis distances of the observations.
score(*X\_test*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L236)
Compute the log-likelihood of `X_test` under the estimated Gaussian model.
The Gaussian model is defined by its mean and covariance matrix which are represented respectively by `self.location_` and `self.covariance_`.
Parameters:
**X\_test**array-like of shape (n\_samples, n\_features)
Test data of which we compute the likelihood, where `n_samples` is the number of samples and `n_features` is the number of features. `X_test` is assumed to be drawn from the same distribution as the data used in fit (including centering).
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**res**float
The log-likelihood of `X_test` with `self.location_` and `self.covariance_` as estimators of the Gaussian model mean and covariance matrix respectively.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.covariance.OAS`
---------------------------------------
[Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification](../../auto_examples/classification/plot_lda#sphx-glr-auto-examples-classification-plot-lda-py)
[Ledoit-Wolf vs OAS estimation](../../auto_examples/covariance/plot_lw_vs_oas#sphx-glr-auto-examples-covariance-plot-lw-vs-oas-py)
[Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood](../../auto_examples/covariance/plot_covariance_estimation#sphx-glr-auto-examples-covariance-plot-covariance-estimation-py)
scikit_learn sklearn.preprocessing.QuantileTransformer sklearn.preprocessing.QuantileTransformer
=========================================
*class*sklearn.preprocessing.QuantileTransformer(*\**, *n\_quantiles=1000*, *output\_distribution='uniform'*, *ignore\_implicit\_zeros=False*, *subsample=100000*, *random\_state=None*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2335)
Transform features using quantiles information.
This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.
The transformation is applied on each feature independently. First an estimate of the cumulative distribution function of a feature is used to map the original values to a uniform distribution. The obtained values are then mapped to the desired output distribution using the associated quantile function. Features values of new/unseen data that fall below or above the fitted range will be mapped to the bounds of the output distribution. Note that this transform is non-linear. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable.
Read more in the [User Guide](../preprocessing#preprocessing-transformer).
New in version 0.19.
Parameters:
**n\_quantiles**int, default=1000 or n\_samples
Number of quantiles to be computed. It corresponds to the number of landmarks used to discretize the cumulative distribution function. If n\_quantiles is larger than the number of samples, n\_quantiles is set to the number of samples as a larger number of quantiles does not give a better approximation of the cumulative distribution function estimator.
**output\_distribution**{‘uniform’, ‘normal’}, default=’uniform’
Marginal distribution for the transformed data. The choices are ‘uniform’ (default) or ‘normal’.
**ignore\_implicit\_zeros**bool, default=False
Only applies to sparse matrices. If True, the sparse entries of the matrix are discarded to compute the quantile statistics. If False, these entries are treated as zeros.
**subsample**int, default=1e5
Maximum number of samples used to estimate the quantiles for computational efficiency. Note that the subsampling procedure may differ for value-identical sparse and dense matrices.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for subsampling and smoothing noise. Please see `subsample` for more details. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**copy**bool, default=True
Set to False to perform inplace transformation and avoid a copy (if the input is already a numpy array).
Attributes:
**n\_quantiles\_**int
The actual number of quantiles used to discretize the cumulative distribution function.
**quantiles\_**ndarray of shape (n\_quantiles, n\_features)
The values corresponding to the quantiles of reference.
**references\_**ndarray of shape (n\_quantiles, )
Quantiles of references.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`quantile_transform`](sklearn.preprocessing.quantile_transform#sklearn.preprocessing.quantile_transform "sklearn.preprocessing.quantile_transform")
Equivalent function without the estimator API.
[`PowerTransformer`](sklearn.preprocessing.powertransformer#sklearn.preprocessing.PowerTransformer "sklearn.preprocessing.PowerTransformer")
Perform mapping to a normal distribution using a power transform.
[`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler")
Perform standardization that is faster, but less robust to outliers.
[`RobustScaler`](sklearn.preprocessing.robustscaler#sklearn.preprocessing.RobustScaler "sklearn.preprocessing.RobustScaler")
Perform robust standardization that removes the influence of outliers but does not put outliers and inliers on the same scale.
#### Notes
NaNs are treated as missing values: disregarded in fit, and maintained in transform.
For a comparison of the different scalers, transformers, and normalizers, see [examples/preprocessing/plot\_all\_scaling.py](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
#### Examples
```
>>> import numpy as np
>>> from sklearn.preprocessing import QuantileTransformer
>>> rng = np.random.RandomState(0)
>>> X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)
>>> qt = QuantileTransformer(n_quantiles=10, random_state=0)
>>> qt.fit_transform(X)
array([...])
```
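Continuing the example, a short sketch (added here, not part of the original) illustrating that the uniform output spans [0, 1] on the training data and that `inverse_transform` undoes the mapping:
```
>>> Xt = qt.transform(X)
>>> float(Xt.min()), float(Xt.max())
(0.0, 1.0)
>>> np.allclose(qt.inverse_transform(Xt), X)
True
```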
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.QuantileTransformer.fit "sklearn.preprocessing.QuantileTransformer.fit")(X[, y]) | Compute the quantiles used for transforming. |
| [`fit_transform`](#sklearn.preprocessing.QuantileTransformer.fit_transform "sklearn.preprocessing.QuantileTransformer.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.preprocessing.QuantileTransformer.get_feature_names_out "sklearn.preprocessing.QuantileTransformer.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.preprocessing.QuantileTransformer.get_params "sklearn.preprocessing.QuantileTransformer.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.QuantileTransformer.inverse_transform "sklearn.preprocessing.QuantileTransformer.inverse_transform")(X) | Back-projection to the original space. |
| [`set_params`](#sklearn.preprocessing.QuantileTransformer.set_params "sklearn.preprocessing.QuantileTransformer.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.QuantileTransformer.transform "sklearn.preprocessing.QuantileTransformer.transform")(X) | Feature-wise transformation of the data. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2539)
Compute the quantiles used for transforming.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse `csc_matrix`. Additionally, the sparse matrix needs to be nonnegative if `ignore_implicit_zeros` is False.
**y**None
Ignored.
Returns:
**self**object
Fitted transformer.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L880)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Same as input features.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2749)
Back-projection to the original space.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse `csc_matrix`. Additionally, the sparse matrix needs to be nonnegative if `ignore_implicit_zeros` is False.
Returns:
**Xt**{ndarray, sparse matrix} of (n\_samples, n\_features)
The projected data.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2728)
Feature-wise transformation of the data.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse `csc_matrix`. Additionally, the sparse matrix needs to be nonnegative if `ignore_implicit_zeros` is False.
Returns:
**Xt**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
The projected data.
Examples using `sklearn.preprocessing.QuantileTransformer`
----------------------------------------------------------
[Partial Dependence and Individual Conditional Expectation Plots](../../auto_examples/inspection/plot_partial_dependence#sphx-glr-auto-examples-inspection-plot-partial-dependence-py)
[Effect of transforming the targets in regression model](../../auto_examples/compose/plot_transformed_target#sphx-glr-auto-examples-compose-plot-transformed-target-py)
[Compare the effect of different scalers on data with outliers](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py)
[Map data to a normal distribution](../../auto_examples/preprocessing/plot_map_data_to_normal#sphx-glr-auto-examples-preprocessing-plot-map-data-to-normal-py)
scikit_learn sklearn.metrics.precision_recall_curve sklearn.metrics.precision\_recall\_curve
========================================
sklearn.metrics.precision\_recall\_curve(*y\_true*, *probas\_pred*, *\**, *pos\_label=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L788)
Compute precision-recall pairs for different probability thresholds.
Note: this implementation is restricted to the binary classification task.
The precision is the ratio `tp / (tp + fp)` where `tp` is the number of true positives and `fp` the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
The recall is the ratio `tp / (tp + fn)` where `tp` is the number of true positives and `fn` the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
The last precision and recall values are 1. and 0. respectively and do not have a corresponding threshold. This ensures that the graph starts on the y axis.
The first precision and recall values are precision=class balance and recall=1.0, which corresponds to a classifier that always predicts the positive class.
Read more in the [User Guide](../model_evaluation#precision-recall-f-measure-metrics).
Parameters:
**y\_true**ndarray of shape (n\_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos\_label should be explicitly given.
**probas\_pred**ndarray of shape (n\_samples,)
Target scores, can either be probability estimates of the positive class, or non-thresholded measure of decisions (as returned by `decision_function` on some classifiers).
**pos\_label**int or str, default=None
The label of the positive class. When `pos_label=None`, if y\_true is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an error will be raised.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**precision**ndarray of shape (n\_thresholds + 1,)
Precision values such that element i is the precision of predictions with score >= thresholds[i] and the last element is 1.
**recall**ndarray of shape (n\_thresholds + 1,)
Decreasing recall values such that element i is the recall of predictions with score >= thresholds[i] and the last element is 0.
**thresholds**ndarray of shape (n\_thresholds,)
Increasing thresholds on the decision function used to compute precision and recall where `n_thresholds = len(np.unique(probas_pred))`.
See also
[`PrecisionRecallDisplay.from_estimator`](sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.from_estimator "sklearn.metrics.PrecisionRecallDisplay.from_estimator")
Plot Precision Recall Curve given a binary classifier.
[`PrecisionRecallDisplay.from_predictions`](sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.from_predictions "sklearn.metrics.PrecisionRecallDisplay.from_predictions")
Plot Precision Recall Curve using predictions from a binary classifier.
[`average_precision_score`](sklearn.metrics.average_precision_score#sklearn.metrics.average_precision_score "sklearn.metrics.average_precision_score")
Compute average precision from prediction scores.
[`det_curve`](sklearn.metrics.det_curve#sklearn.metrics.det_curve "sklearn.metrics.det_curve")
Compute error rates for different probability thresholds.
[`roc_curve`](sklearn.metrics.roc_curve#sklearn.metrics.roc_curve "sklearn.metrics.roc_curve")
Compute Receiver operating characteristic (ROC) curve.
#### Examples
```
>>> import numpy as np
>>> from sklearn.metrics import precision_recall_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> precision, recall, thresholds = precision_recall_curve(
... y_true, y_scores)
>>> precision
array([0.5 , 0.66666667, 0.5 , 1. , 1. ])
>>> recall
array([1. , 1. , 0.5, 0.5, 0. ])
>>> thresholds
array([0.1 , 0.35, 0.4 , 0.8 ])
```
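Continuing the example, a quick check (added here as a sketch) of the endpoint convention described above: there is one more precision/recall pair than there are thresholds, and the final pair is (1, 0):
```
>>> len(precision) == len(recall) == len(thresholds) + 1
True
>>> float(precision[-1]), float(recall[-1])
(1.0, 0.0)
```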
Examples using `sklearn.metrics.precision_recall_curve`
-------------------------------------------------------
[Visualizations with Display Objects](../../auto_examples/miscellaneous/plot_display_object_visualization#sphx-glr-auto-examples-miscellaneous-plot-display-object-visualization-py)
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
scikit_learn sklearn.feature_selection.VarianceThreshold sklearn.feature\_selection.VarianceThreshold
============================================
*class*sklearn.feature\_selection.VarianceThreshold(*threshold=0.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_variance_threshold.py#L11)
Feature selector that removes all low-variance features.
This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
Read more in the [User Guide](../feature_selection#variance-threshold).
Parameters:
**threshold**float, default=0
Features with a training-set variance lower than this threshold will be removed. The default is to keep all features with non-zero variance, i.e. remove the features that have the same value in all samples.
Attributes:
**variances\_**array, shape (n\_features,)
Variances of individual features.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`SelectFromModel`](sklearn.feature_selection.selectfrommodel#sklearn.feature_selection.SelectFromModel "sklearn.feature_selection.SelectFromModel")
Meta-transformer for selecting features based on importance weights.
[`SelectPercentile`](sklearn.feature_selection.selectpercentile#sklearn.feature_selection.SelectPercentile "sklearn.feature_selection.SelectPercentile")
Select features according to a percentile of the highest scores.
[`SequentialFeatureSelector`](sklearn.feature_selection.sequentialfeatureselector#sklearn.feature_selection.SequentialFeatureSelector "sklearn.feature_selection.SequentialFeatureSelector")
Transformer that performs Sequential Feature Selection.
#### Notes
Allows NaN in the input. Raises ValueError if no feature in X meets the variance threshold.
#### Examples
The following dataset has integer features, two of which are the same in every sample. These are removed with the default setting for threshold:
```
>>> from sklearn.feature_selection import VarianceThreshold
>>> X = [[0, 2, 0, 3], [0, 1, 4, 3], [0, 1, 1, 3]]
>>> selector = VarianceThreshold()
>>> selector.fit_transform(X)
array([[2, 0],
[1, 4],
[1, 1]])
```
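A short follow-up sketch (continuing the example above, not part of the original reference) inspecting the fitted `variances_` and the selection mask via `get_support`; the two constant columns have zero variance and are dropped:
```
>>> selector.variances_
array([0.        , 0.22222222, 2.88888889, 0.        ])
>>> selector.get_support()
array([False,  True,  True, False])
>>> selector.get_support(indices=True)
array([1, 2])
```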
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.feature_selection.VarianceThreshold.fit "sklearn.feature_selection.VarianceThreshold.fit")(X[, y]) | Learn empirical variances from X. |
| [`fit_transform`](#sklearn.feature_selection.VarianceThreshold.fit_transform "sklearn.feature_selection.VarianceThreshold.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.feature_selection.VarianceThreshold.get_feature_names_out "sklearn.feature_selection.VarianceThreshold.get_feature_names_out")([input\_features]) | Mask feature names according to selected features. |
| [`get_params`](#sklearn.feature_selection.VarianceThreshold.get_params "sklearn.feature_selection.VarianceThreshold.get_params")([deep]) | Get parameters for this estimator. |
| [`get_support`](#sklearn.feature_selection.VarianceThreshold.get_support "sklearn.feature_selection.VarianceThreshold.get_support")([indices]) | Get a mask, or integer index, of the features selected. |
| [`inverse_transform`](#sklearn.feature_selection.VarianceThreshold.inverse_transform "sklearn.feature_selection.VarianceThreshold.inverse_transform")(X) | Reverse the transformation operation. |
| [`set_params`](#sklearn.feature_selection.VarianceThreshold.set_params "sklearn.feature_selection.VarianceThreshold.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_selection.VarianceThreshold.transform "sklearn.feature_selection.VarianceThreshold.transform")(X) | Reduce X to the selected features. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_variance_threshold.py#L73)
Learn empirical variances from X.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Data from which to compute variances, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**any, default=None
Ignored. This parameter exists only for compatibility with sklearn.pipeline.Pipeline.
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L146)
Mask feature names according to selected features.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_support(*indices=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L33)
Get a mask, or integer index, of the features selected.
Parameters:
**indices**bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
Returns:
**support**array
An index that selects the retained features from a feature vector. If `indices` is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If `indices` is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L106)
Reverse the transformation operation.
Parameters:
**X**array of shape [n\_samples, n\_selected\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_original\_features]
`X` with columns of zeros inserted where features would have been removed by [`transform`](#sklearn.feature_selection.VarianceThreshold.transform "sklearn.feature_selection.VarianceThreshold.transform").
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L68)
Reduce X to the selected features.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_selected\_features]
The input samples with only the selected features.
scikit_learn sklearn.model_selection.HalvingRandomSearchCV sklearn.model\_selection.HalvingRandomSearchCV
==============================================
*class*sklearn.model\_selection.HalvingRandomSearchCV(*estimator*, *param\_distributions*, *\**, *n\_candidates='exhaust'*, *factor=3*, *resource='n\_samples'*, *max\_resources='auto'*, *min\_resources='smallest'*, *aggressive\_elimination=False*, *cv=5*, *scoring=None*, *refit=True*, *error\_score=nan*, *return\_train\_score=True*, *random\_state=None*, *n\_jobs=None*, *verbose=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search_successive_halving.py#L721)
Randomized search on hyperparameters.
The search strategy starts evaluating all the candidates with a small amount of resources and iteratively selects the best candidates, using more and more resources.
The candidates are sampled at random from the parameter space and the number of sampled candidates is determined by `n_candidates`.
Read more in the [User guide](../grid_search#successive-halving-user-guide).
Note
This estimator is still **experimental** for now: the predictions and the API might change without any deprecation cycle. To use it, you need to explicitly import `enable_halving_search_cv`:
```
>>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_halving_search_cv # noqa
>>> # now you can import normally from model_selection
>>> from sklearn.model_selection import HalvingRandomSearchCV
```
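A minimal usage sketch (added here, not from the original reference). With 400 samples, 2 classes and the default `cv=5`, the `min_resources='smallest'` heuristic described below gives 2 \* 5 \* 2 = 20 samples for the first iteration, and `factor=3` then triples the resources while keeping roughly a third of the candidates at each step; the bookkeeping shown is what one would expect under those defaults:
```
>>> from scipy.stats import randint
>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> X, y = make_classification(n_samples=400, random_state=0)
>>> param_distributions = {"max_depth": [3, None],
...                        "min_samples_split": randint(2, 11)}
>>> search = HalvingRandomSearchCV(RandomForestClassifier(n_estimators=10, random_state=0),
...                                param_distributions,
...                                random_state=0).fit(X, y)
>>> search.n_resources_    # samples used at each iteration
[20, 60, 180]
>>> search.n_candidates_   # candidates evaluated at each iteration
[20, 7, 3]
```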
Parameters:
**estimator**estimator object
This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a `score` function, or `scoring` must be passed.
**param\_distributions**dict
Dictionary with parameter names (str) as keys and distributions or lists of parameters to try. Distributions must provide a `rvs` method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly.
**n\_candidates**int, default=’exhaust’
The number of candidate parameters to sample, at the first iteration. Using ‘exhaust’ will sample enough candidates so that the last iteration uses as many resources as possible, based on `min_resources`, `max_resources` and `factor`. In this case, `min_resources` cannot be ‘exhaust’.
**factor**int or float, default=3
The ‘halving’ parameter, which determines the proportion of candidates that are selected for each subsequent iteration. For example, `factor=3` means that only one third of the candidates are selected.
**resource**`'n_samples'` or str, default=’n\_samples’
Defines the resource that increases with each iteration. By default, the resource is the number of samples. It can also be set to any parameter of the base estimator that accepts positive integer values, e.g. ‘n\_iterations’ or ‘n\_estimators’ for a gradient boosting estimator. In this case `max_resources` cannot be ‘auto’ and must be set explicitly.
**max\_resources**int, default=’auto’
The maximum number of resources that any candidate is allowed to use for a given iteration. By default, this is set to `n_samples` when `resource='n_samples'` (default), else an error is raised.
**min\_resources**{‘exhaust’, ‘smallest’} or int, default=’smallest’
The minimum amount of resource that any candidate is allowed to use for a given iteration. Equivalently, this defines the amount of resources `r0` that are allocated for each candidate at the first iteration.
* ‘smallest’ is a heuristic that sets `r0` to a small value:
+ `n_splits * 2` when `resource='n_samples'` for a regression problem
+ `n_classes * n_splits * 2` when `resource='n_samples'` for a classification problem
+ `1` when `resource != 'n_samples'`
* ‘exhaust’ will set `r0` such that the **last** iteration uses as much resources as possible. Namely, the last iteration will use the highest value smaller than `max_resources` that is a multiple of both `min_resources` and `factor`. In general, using ‘exhaust’ leads to a more accurate estimator, but is slightly more time consuming. ‘exhaust’ isn’t available when `n_candidates='exhaust'`.
Note that the amount of resources used at each iteration is always a multiple of `min_resources`.
**aggressive\_elimination**bool, default=False
This is only relevant in cases where there isn’t enough resources to reduce the remaining candidates to at most `factor` after the last iteration. If `True`, then the search process will ‘replay’ the first iteration for as long as needed until the number of candidates is small enough. This is `False` by default, which means that the last iteration may evaluate more than `factor` candidates. See [Aggressive elimination of candidates](../grid_search#aggressive-elimination) for more details.
**cv**int, cross-validation generator or an iterable, default=5
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* integer, to specify the number of folds in a `(Stratified)KFold`,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Note
Due to implementation details, the folds produced by `cv` must be the same across multiple calls to `cv.split()`. For built-in `scikit-learn` iterators, this can be achieved by deactivating shuffling (`shuffle=False`), or by setting the `cv`’s `random_state` parameter to an integer.
**scoring**str, callable, or None, default=None
A single string (see [The scoring parameter: defining model evaluation rules](../model_evaluation#scoring-parameter)) or a callable (see [Defining your scoring strategy from metric functions](../model_evaluation#scoring)) to evaluate the predictions on the test set. If None, the estimator’s score method is used.
**refit**bool, default=True
If True, refit an estimator using the best found parameters on the whole dataset.
The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `HalvingRandomSearchCV` instance.
**error\_score**‘raise’ or numeric
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error. Default is `np.nan`.
**return\_train\_score**bool, default=False
If `False`, the `cv_results_` attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However, computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.
**random\_state**int, RandomState instance or None, default=None
Pseudo random number generator state used for subsampling the dataset when `resources != 'n_samples'`. Also used for random uniform sampling from lists of possible values instead of scipy.stats distributions. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**n\_jobs**int or None, default=None
Number of jobs to run in parallel. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**int
Controls the verbosity: the higher, the more messages.
Attributes:
**n\_resources\_**list of int
The amount of resources used at each iteration.
**n\_candidates\_**list of int
The number of candidate parameters that were evaluated at each iteration.
**n\_remaining\_candidates\_**int
The number of candidate parameters that are left after the last iteration. It corresponds to `ceil(n_candidates[-1] / factor)`
**max\_resources\_**int
The maximum number of resources that any candidate is allowed to use for a given iteration. Note that since the number of resources used at each iteration must be a multiple of `min_resources_`, the actual number of resources used at the last iteration may be smaller than `max_resources_`.
**min\_resources\_**int
The amount of resources that are allocated for each candidate at the first iteration.
**n\_iterations\_**int
The actual number of iterations that were run. This is equal to `n_required_iterations_` if `aggressive_elimination` is `True`. Else, this is equal to `min(n_possible_iterations_, n_required_iterations_)`.
**n\_possible\_iterations\_**int
The number of iterations that are possible starting with `min_resources_` resources and without exceeding `max_resources_`.
**n\_required\_iterations\_**int
The number of iterations that are required to end up with less than `factor` candidates at the last iteration, starting with `min_resources_` resources. This will be smaller than `n_possible_iterations_` when there aren’t enough resources.
**cv\_results\_**dict of numpy (masked) ndarrays
A dict with keys as column headers and values as columns, that can be imported into a pandas `DataFrame`. It contains lots of information for analysing the results of a search. Please refer to the [User guide](../grid_search#successive-halving-cv-results) for details.
**best\_estimator\_**estimator or dict
Estimator that was chosen by the search, i.e. estimator which gave highest score (or smallest loss if specified) on the left out data. Not available if `refit=False`.
**best\_score\_**float
Mean cross-validated score of the best\_estimator.
**best\_params\_**dict
Parameter setting that gave the best results on the hold out data.
**best\_index\_**int
The index (of the `cv_results_` arrays) which corresponds to the best candidate parameter setting.
The dict at `search.cv_results_['params'][search.best_index_]` gives the parameter setting for the best model, which gives the highest mean score (`search.best_score_`).
**scorer\_**function or a dict
Scorer function used on the held out data to choose the best parameters for the model.
**n\_splits\_**int
The number of cross-validation splits (folds/iterations).
**refit\_time\_**float
Seconds used for refitting the best model on the whole dataset.
This is present only if `refit` is not False.
**multimetric\_**bool
Whether or not the scorers compute several metrics.
[`classes_`](#sklearn.model_selection.HalvingRandomSearchCV.classes_ "sklearn.model_selection.HalvingRandomSearchCV.classes_")ndarray of shape (n\_classes,)
Class labels.
[`n_features_in_`](#sklearn.model_selection.HalvingRandomSearchCV.n_features_in_ "sklearn.model_selection.HalvingRandomSearchCV.n_features_in_")int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if `best_estimator_` is defined (see the documentation for the `refit` parameter for more details) and if `best_estimator_` exposes `feature_names_in_` when fit.
New in version 1.0.
See also
[`HalvingGridSearchCV`](sklearn.model_selection.halvinggridsearchcv#sklearn.model_selection.HalvingGridSearchCV "sklearn.model_selection.HalvingGridSearchCV")
Search over a grid of parameters using successive halving.
#### Notes
The parameters selected are those that maximize the score of the held-out data, according to the scoring parameter.
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.experimental import enable_halving_search_cv # noqa
>>> from sklearn.model_selection import HalvingRandomSearchCV
>>> from scipy.stats import randint
>>> import numpy as np
...
>>> X, y = load_iris(return_X_y=True)
>>> clf = RandomForestClassifier(random_state=0)
>>> np.random.seed(0)
...
>>> param_distributions = {"max_depth": [3, None],
... "min_samples_split": randint(2, 11)}
>>> search = HalvingRandomSearchCV(clf, param_distributions,
... resource='n_estimators',
... max_resources=10,
... random_state=0).fit(X, y)
>>> search.best_params_
{'max_depth': None, 'min_samples_split': 10, 'n_estimators': 9}
```
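The attributes described above can be used to inspect how the successive halving unfolded; a rough sketch continuing the example (no outputs are shown because the exact values depend on the data and the sampled candidates):
```
# resources and candidates per halving iteration
print(search.n_iterations_)    # number of iterations actually run
print(search.n_resources_)     # each entry is a multiple of min_resources_
print(search.n_candidates_)    # shrinks by roughly `factor` at each iteration

# per-candidate details, importable into a pandas DataFrame
import pandas as pd
results = pd.DataFrame(search.cv_results_)
```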
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.model_selection.HalvingRandomSearchCV.decision_function "sklearn.model_selection.HalvingRandomSearchCV.decision_function")(X) | Call decision\_function on the estimator with the best found parameters. |
| [`fit`](#sklearn.model_selection.HalvingRandomSearchCV.fit "sklearn.model_selection.HalvingRandomSearchCV.fit")(X[, y, groups]) | Run fit with all sets of parameters. |
| [`get_params`](#sklearn.model_selection.HalvingRandomSearchCV.get_params "sklearn.model_selection.HalvingRandomSearchCV.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.model_selection.HalvingRandomSearchCV.inverse_transform "sklearn.model_selection.HalvingRandomSearchCV.inverse_transform")(Xt) | Call inverse\_transform on the estimator with the best found params. |
| [`predict`](#sklearn.model_selection.HalvingRandomSearchCV.predict "sklearn.model_selection.HalvingRandomSearchCV.predict")(X) | Call predict on the estimator with the best found parameters. |
| [`predict_log_proba`](#sklearn.model_selection.HalvingRandomSearchCV.predict_log_proba "sklearn.model_selection.HalvingRandomSearchCV.predict_log_proba")(X) | Call predict\_log\_proba on the estimator with the best found parameters. |
| [`predict_proba`](#sklearn.model_selection.HalvingRandomSearchCV.predict_proba "sklearn.model_selection.HalvingRandomSearchCV.predict_proba")(X) | Call predict\_proba on the estimator with the best found parameters. |
| [`score`](#sklearn.model_selection.HalvingRandomSearchCV.score "sklearn.model_selection.HalvingRandomSearchCV.score")(X[, y]) | Return the score on the given data, if the estimator has been refit. |
| [`score_samples`](#sklearn.model_selection.HalvingRandomSearchCV.score_samples "sklearn.model_selection.HalvingRandomSearchCV.score_samples")(X) | Call score\_samples on the estimator with the best found parameters. |
| [`set_params`](#sklearn.model_selection.HalvingRandomSearchCV.set_params "sklearn.model_selection.HalvingRandomSearchCV.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.model_selection.HalvingRandomSearchCV.transform "sklearn.model_selection.HalvingRandomSearchCV.transform")(X) | Call transform on the estimator with the best found parameters. |
*property*classes\_
Class labels.
Only available when `refit=True` and the estimator is a classifier.
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L548)
Call decision\_function on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `decision_function`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_score**ndarray of shape (n\_samples,) or (n\_samples, n\_classes) or (n\_samples, n\_classes \* (n\_classes-1) / 2)
Result of the decision function for `X` based on the estimator with the best found parameters.
fit(*X*, *y=None*, *groups=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search_successive_halving.py#L222)
Run fit with all sets of parameters.
Parameters:
**X**array-like, shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like, shape (n\_samples,) or (n\_samples, n\_output), optional
Target relative to X for classification or regression; None for unsupervised learning.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) instance (e.g., [`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")).
**\*\*fit\_params**dict of string -> object
Parameters passed to the `fit` method of the estimator.
Returns:
**self**object
Instance of fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*Xt*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L593)
Call inverse\_transform on the estimator with the best found params.
Only available if the underlying estimator implements `inverse_transform` and `refit=True`.
Parameters:
**Xt**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Result of the `inverse_transform` function for `Xt` based on the estimator with the best found parameters.
*property*n\_features\_in\_
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
Only available when `refit=True`.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L480)
Call predict on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
The predicted labels or values for `X` based on the estimator with the best found parameters.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L525)
Call predict\_log\_proba on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict_log_proba`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted class log-probabilities for `X` based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L502)
Call predict\_proba on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict_proba`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted class probabilities for `X` based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
score(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L413)
Return the score on the given data, if the estimator has been refit.
This uses the score defined by `scoring` where provided, and the `best_estimator_.score` method otherwise.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples, n\_output) or (n\_samples,), default=None
Target relative to X for classification or regression; None for unsupervised learning.
Returns:
**score**float
The score defined by `scoring` if provided, and the `best_estimator_.score` method otherwise.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L457)
Call score\_samples on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `score_samples`.
New in version 0.24.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of the underlying estimator.
Returns:
**y\_score**ndarray of shape (n\_samples,)
The `best_estimator_.score_samples` method.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L571)
Call transform on the estimator with the best found parameters.
Only available if the underlying estimator supports `transform` and `refit=True`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**Xt**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
`X` transformed in the new space based on the estimator with the best found parameters.
Examples using `sklearn.model_selection.HalvingRandomSearchCV`
--------------------------------------------------------------
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Prediction Intervals for Gradient Boosting Regression](../../auto_examples/ensemble/plot_gradient_boosting_quantile#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-quantile-py)
[Successive Halving Iterations](../../auto_examples/model_selection/plot_successive_halving_iterations#sphx-glr-auto-examples-model-selection-plot-successive-halving-iterations-py)
scikit_learn sklearn.gaussian_process.GaussianProcessRegressor sklearn.gaussian\_process.GaussianProcessRegressor
==================================================
*class*sklearn.gaussian\_process.GaussianProcessRegressor(*kernel=None*, *\**, *alpha=1e-10*, *optimizer='fmin\_l\_bfgs\_b'*, *n\_restarts\_optimizer=0*, *normalize\_y=False*, *copy\_X\_train=True*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/_gpr.py#L24)
Gaussian process regression (GPR).
The implementation is based on Algorithm 2.1 of [[1]](#rf75674b0f418-1).
In addition to standard scikit-learn estimator API, [`GaussianProcessRegressor`](#sklearn.gaussian_process.GaussianProcessRegressor "sklearn.gaussian_process.GaussianProcessRegressor"):
* allows prediction without prior fitting (based on the GP prior)
* provides an additional method `sample_y(X)`, which evaluates samples drawn from the GPR (prior or posterior) at given inputs
* exposes a method `log_marginal_likelihood(theta)`, which can be used externally for other ways of selecting hyperparameters, e.g., via Markov chain Monte Carlo.
Read more in the [User Guide](../gaussian_process#gaussian-process).
New in version 0.18.
Parameters:
**kernel**kernel instance, default=None
The kernel specifying the covariance function of the GP. If None is passed, the kernel `ConstantKernel(1.0, constant_value_bounds="fixed") * RBF(1.0, length_scale_bounds="fixed")` is used as default. Note that the kernel hyperparameters are optimized during fitting unless the bounds are marked as “fixed”.
**alpha**float or ndarray of shape (n\_samples,), default=1e-10
Value added to the diagonal of the kernel matrix during fitting. This can prevent a potential numerical issue during fitting, by ensuring that the calculated values form a positive definite matrix. It can also be interpreted as the variance of additional Gaussian measurement noise on the training observations. Note that this is different from using a `WhiteKernel`. If an array is passed, it must have the same number of entries as the data used for fitting and is used as datapoint-dependent noise level. Allowing the noise level to be specified directly as a parameter is mainly for convenience and for consistency with [`Ridge`](sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge").
**optimizer**“fmin\_l\_bfgs\_b” or callable, default=”fmin\_l\_bfgs\_b”
Can either be one of the internally supported optimizers for optimizing the kernel’s parameters, specified by a string, or an externally defined optimizer passed as a callable. If a callable is passed, it must have the signature:
```
def optimizer(obj_func, initial_theta, bounds):
# * 'obj_func': the objective function to be minimized, which
# takes the hyperparameters theta as a parameter and an
# optional flag eval_gradient, which determines if the
# gradient is returned additionally to the function value
# * 'initial_theta': the initial value for theta, which can be
# used by local optimizers
# * 'bounds': the bounds on the values of theta
....
# Returned are the best found hyperparameters theta and
# the corresponding value of the target function.
return theta_opt, func_min
```
Per default, the L-BFGS-B algorithm from `scipy.optimize.minimize` is used. If None is passed, the kernel’s parameters are kept fixed. Available internal optimizers are: `{'fmin_l_bfgs_b'}`.
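As a concrete illustration, a custom optimizer callable with this signature could simply wrap `scipy.optimize.minimize` (a minimal sketch, not part of scikit-learn; the RBF kernel below is only a placeholder):
```
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def custom_optimizer(obj_func, initial_theta, bounds):
    # obj_func(theta) returns both the objective value and its gradient,
    # so jac=True lets L-BFGS-B use the analytic gradient
    result = minimize(
        obj_func, initial_theta, bounds=bounds, jac=True, method="L-BFGS-B"
    )
    return result.x, result.fun

gpr = GaussianProcessRegressor(kernel=RBF(), optimizer=custom_optimizer)
```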
**n\_restarts\_optimizer**int, default=0
The number of restarts of the optimizer for finding the kernel’s parameters which maximize the log-marginal likelihood. The first run of the optimizer is performed from the kernel’s initial parameters, the remaining ones (if any) from thetas sampled log-uniform randomly from the space of allowed theta-values. If greater than 0, all bounds must be finite. Note that `n_restarts_optimizer == 0` implies that one run is performed.
**normalize\_y**bool, default=False
Whether or not to normalize the target values `y` by removing the mean and scaling to unit-variance. This is recommended for cases where zero-mean, unit-variance priors are used. Note that, in this implementation, the normalisation is reversed before the GP predictions are reported.
Changed in version 0.23.
**copy\_X\_train**bool, default=True
If True, a persistent copy of the training data is stored in the object. Otherwise, just a reference to the training data is stored, which might cause predictions to change if the data is modified externally.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation used to initialize the centers. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**X\_train\_**array-like of shape (n\_samples, n\_features) or list of object
Feature vectors or other representations of training data (also required for prediction).
**y\_train\_**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target values in training data (also required for prediction).
**kernel\_**kernel instance
The kernel used for prediction. The structure of the kernel is the same as the one passed as parameter but with optimized hyperparameters.
**L\_**array-like of shape (n\_samples, n\_samples)
Lower-triangular Cholesky decomposition of the kernel in `X_train_`.
**alpha\_**array-like of shape (n\_samples,)
Dual coefficients of training data points in kernel space.
**log\_marginal\_likelihood\_value\_**float
The log-marginal-likelihood of `self.kernel_.theta`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`GaussianProcessClassifier`](sklearn.gaussian_process.gaussianprocessclassifier#sklearn.gaussian_process.GaussianProcessClassifier "sklearn.gaussian_process.GaussianProcessClassifier")
Gaussian process classification (GPC) based on Laplace approximation.
#### References
[[1](#id1)] [Rasmussen, Carl Edward. “Gaussian processes in machine learning.” Summer school on machine learning. Springer, Berlin, Heidelberg, 2003](http://www.gaussianprocess.org/gpml/chapters/RW.pdf).
#### Examples
```
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = DotProduct() + WhiteKernel()
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3680...
>>> gpr.predict(X[:2,:], return_std=True)
(array([653.0..., 592.1...]), array([316.6..., 316.6...]))
```
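Building on the example above, the GP prior can be queried before fitting, and `sample_y` draws functions from the fitted posterior (a small sketch; outputs are omitted since they depend on the random draws):
```
# predictions from the prior: this model has not been fitted
prior_gpr = GaussianProcessRegressor(kernel=kernel)
prior_mean, prior_std = prior_gpr.predict(X[:2, :], return_std=True)

# three posterior samples per query point, shape (2, 3) for a single target
samples = gpr.sample_y(X[:2, :], n_samples=3, random_state=0)
```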
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.gaussian_process.GaussianProcessRegressor.fit "sklearn.gaussian_process.GaussianProcessRegressor.fit")(X, y) | Fit Gaussian process regression model. |
| [`get_params`](#sklearn.gaussian_process.GaussianProcessRegressor.get_params "sklearn.gaussian_process.GaussianProcessRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`log_marginal_likelihood`](#sklearn.gaussian_process.GaussianProcessRegressor.log_marginal_likelihood "sklearn.gaussian_process.GaussianProcessRegressor.log_marginal_likelihood")([theta, ...]) | Return log-marginal likelihood of theta for training data. |
| [`predict`](#sklearn.gaussian_process.GaussianProcessRegressor.predict "sklearn.gaussian_process.GaussianProcessRegressor.predict")(X[, return\_std, return\_cov]) | Predict using the Gaussian process regression model. |
| [`sample_y`](#sklearn.gaussian_process.GaussianProcessRegressor.sample_y "sklearn.gaussian_process.GaussianProcessRegressor.sample_y")(X[, n\_samples, random\_state]) | Draw samples from Gaussian process and evaluate at X. |
| [`score`](#sklearn.gaussian_process.GaussianProcessRegressor.score "sklearn.gaussian_process.GaussianProcessRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.gaussian_process.GaussianProcessRegressor.set_params "sklearn.gaussian_process.GaussianProcessRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/_gpr.py#L195)
Fit Gaussian process regression model.
Parameters:
**X**array-like of shape (n\_samples, n\_features) or list of object
Feature vectors or other representations of training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
Returns:
**self**object
GaussianProcessRegressor class instance.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
log\_marginal\_likelihood(*theta=None*, *eval\_gradient=False*, *clone\_kernel=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/_gpr.py#L490)
Return log-marginal likelihood of theta for training data.
Parameters:
**theta**array-like of shape (n\_kernel\_params,) default=None
Kernel hyperparameters for which the log-marginal likelihood is evaluated. If None, the precomputed log\_marginal\_likelihood of `self.kernel_.theta` is returned.
**eval\_gradient**bool, default=False
If True, the gradient of the log-marginal likelihood with respect to the kernel hyperparameters at position theta is returned additionally. If True, theta must not be None.
**clone\_kernel**bool, default=True
If True, the kernel attribute is copied. If False, the kernel attribute is modified, but may result in a performance improvement.
Returns:
**log\_likelihood**float
Log-marginal likelihood of theta for training data.
**log\_likelihood\_gradient**ndarray of shape (n\_kernel\_params,), optional
Gradient of the log-marginal likelihood with respect to the kernel hyperparameters at position theta. Only returned when eval\_gradient is True.
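For example, the precomputed value can be retrieved, or the likelihood and its gradient re-evaluated at the optimized hyperparameters (a sketch assuming `gpr` is a fitted `GaussianProcessRegressor`):
```
lml = gpr.log_marginal_likelihood()  # precomputed value at self.kernel_.theta
lml, grad = gpr.log_marginal_likelihood(gpr.kernel_.theta, eval_gradient=True)
```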
predict(*X*, *return\_std=False*, *return\_cov=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/_gpr.py#L327)
Predict using the Gaussian process regression model.
We can also predict based on an unfitted model by using the GP prior. In addition to the mean of the predictive distribution, optionally also returns its standard deviation (`return_std=True`) or covariance (`return_cov=True`). Note that at most one of the two can be requested.
Parameters:
**X**array-like of shape (n\_samples, n\_features) or list of object
Query points where the GP is evaluated.
**return\_std**bool, default=False
If True, the standard-deviation of the predictive distribution at the query points is returned along with the mean.
**return\_cov**bool, default=False
If True, the covariance of the joint predictive distribution at the query points is returned along with the mean.
Returns:
**y\_mean**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Mean of predictive distribution at query points.
**y\_std**ndarray of shape (n\_samples,) or (n\_samples, n\_targets), optional
Standard deviation of predictive distribution at query points. Only returned when `return_std` is True.
**y\_cov**ndarray of shape (n\_samples, n\_samples) or (n\_samples, n\_samples, n\_targets), optional
Covariance of joint predictive distribution at query points. Only returned when `return_cov` is True.
sample\_y(*X*, *n\_samples=1*, *random\_state=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/_gpr.py#L451)
Draw samples from Gaussian process and evaluate at X.
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features) or list of object
Query points where the GP is evaluated.
**n\_samples**int, default=1
Number of samples drawn from the Gaussian process per query point.
**random\_state**int, RandomState instance or None, default=0
Determines random number generation to randomly draw samples. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**y\_samples**ndarray of shape (n\_samples\_X, n\_samples), or (n\_samples\_X, n\_targets, n\_samples)
Values of n\_samples samples drawn from Gaussian process and evaluated at query points.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.gaussian_process.GaussianProcessRegressor`
------------------------------------------------------------------
[Comparison of kernel ridge and Gaussian process regression](../../auto_examples/gaussian_process/plot_compare_gpr_krr#sphx-glr-auto-examples-gaussian-process-plot-compare-gpr-krr-py)
[Gaussian Processes regression: basic introductory example](../../auto_examples/gaussian_process/plot_gpr_noisy_targets#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-targets-py)
[Gaussian process regression (GPR) on Mauna Loa CO2 data](../../auto_examples/gaussian_process/plot_gpr_co2#sphx-glr-auto-examples-gaussian-process-plot-gpr-co2-py)
[Gaussian process regression (GPR) with noise-level estimation](../../auto_examples/gaussian_process/plot_gpr_noisy#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-py)
[Gaussian processes on discrete data structures](../../auto_examples/gaussian_process/plot_gpr_on_structured_data#sphx-glr-auto-examples-gaussian-process-plot-gpr-on-structured-data-py)
[Illustration of prior and posterior Gaussian process for different kernels](../../auto_examples/gaussian_process/plot_gpr_prior_posterior#sphx-glr-auto-examples-gaussian-process-plot-gpr-prior-posterior-py)
scikit_learn sklearn.naive_bayes.MultinomialNB sklearn.naive\_bayes.MultinomialNB
==================================
*class*sklearn.naive\_bayes.MultinomialNB(*\**, *alpha=1.0*, *fit\_prior=True*, *class\_prior=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L750)
Naive Bayes classifier for multinomial models.
The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.
Read more in the [User Guide](../naive_bayes#multinomial-naive-bayes).
Parameters:
**alpha**float, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
**fit\_prior**bool, default=True
Whether to learn class prior probabilities or not. If false, a uniform prior will be used.
**class\_prior**array-like of shape (n\_classes,), default=None
Prior probabilities of the classes. If specified, the priors are not adjusted according to the data.
Attributes:
**class\_count\_**ndarray of shape (n\_classes,)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
**class\_log\_prior\_**ndarray of shape (n\_classes,)
Smoothed empirical log probability for each class.
**classes\_**ndarray of shape (n\_classes,)
Class labels known to the classifier
**feature\_count\_**ndarray of shape (n\_classes, n\_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
**feature\_log\_prob\_**ndarray of shape (n\_classes, n\_features)
Empirical log probability of features given a class, `P(x_i|y)`.
[`n_features_`](#sklearn.naive_bayes.MultinomialNB.n_features_ "sklearn.naive_bayes.MultinomialNB.n_features_")int
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`BernoulliNB`](sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB "sklearn.naive_bayes.BernoulliNB")
Naive Bayes classifier for multivariate Bernoulli models.
[`CategoricalNB`](sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB "sklearn.naive_bayes.CategoricalNB")
Naive Bayes classifier for categorical features.
[`ComplementNB`](sklearn.naive_bayes.complementnb#sklearn.naive_bayes.ComplementNB "sklearn.naive_bayes.ComplementNB")
Complement Naive Bayes classifier.
[`GaussianNB`](sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB")
Gaussian Naive Bayes.
#### References
C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 234-265. <https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html>
#### Examples
```
>>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import MultinomialNB
>>> clf = MultinomialNB()
>>> clf.fit(X, y)
MultinomialNB()
>>> print(clf.predict(X[2:3]))
[3]
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.naive_bayes.MultinomialNB.fit "sklearn.naive_bayes.MultinomialNB.fit")(X, y[, sample\_weight]) | Fit Naive Bayes classifier according to X, y. |
| [`get_params`](#sklearn.naive_bayes.MultinomialNB.get_params "sklearn.naive_bayes.MultinomialNB.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.naive_bayes.MultinomialNB.partial_fit "sklearn.naive_bayes.MultinomialNB.partial_fit")(X, y[, classes, sample\_weight]) | Incremental fit on a batch of samples. |
| [`predict`](#sklearn.naive_bayes.MultinomialNB.predict "sklearn.naive_bayes.MultinomialNB.predict")(X) | Perform classification on an array of test vectors X. |
| [`predict_log_proba`](#sklearn.naive_bayes.MultinomialNB.predict_log_proba "sklearn.naive_bayes.MultinomialNB.predict_log_proba")(X) | Return log-probability estimates for the test vector X. |
| [`predict_proba`](#sklearn.naive_bayes.MultinomialNB.predict_proba "sklearn.naive_bayes.MultinomialNB.predict_proba")(X) | Return probability estimates for the test vector X. |
| [`score`](#sklearn.naive_bayes.MultinomialNB.score "sklearn.naive_bayes.MultinomialNB.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.naive_bayes.MultinomialNB.set_params "sklearn.naive_bayes.MultinomialNB.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L679)
Fit Naive Bayes classifier according to X, y.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weights applied to individual samples (1. for unweighted).
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_features\_
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2. Use `n_features_in_` instead.
partial\_fit(*X*, *y*, *classes=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L598)
Incremental fit on a batch of samples.
This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning.
This is especially useful when the whole dataset is too big to fit in memory at once.
This method has some performance overhead, hence it is better to call partial\_fit on chunks of data that are as large as possible (as long as they fit in the memory budget) to hide the overhead.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target values.
**classes**array-like of shape (n\_classes,), default=None
List of all the classes that can possibly appear in the y vector.
Must be provided at the first call to partial\_fit, can be omitted in subsequent calls.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weights applied to individual samples (1. for unweighted).
Returns:
**self**object
Returns the instance itself.
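A minimal out-of-core sketch of `partial_fit` (the data and chunk size are illustrative; `classes` must list every label that can appear across all chunks):
```
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(60, 100))   # e.g. word counts
y = rng.randint(3, size=60)

clf = MultinomialNB()
classes = np.unique(y)               # required on the first call to partial_fit
for start in range(0, X.shape[0], 20):
    clf.partial_fit(X[start:start + 20], y[start:start + 20], classes=classes)
```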
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L65)
Perform classification on an array of test vectors X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**C**ndarray of shape (n\_samples,)
Predicted target values for X.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L84)
Return log-probability estimates for the test vector X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**C**array-like of shape (n\_samples, n\_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L107)
Return probability estimates for the test vector X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**C**array-like of shape (n\_samples, n\_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.naive_bayes.MultinomialNB`
--------------------------------------------------
[Out-of-core classification of text documents](../../auto_examples/applications/plot_out_of_core_classification#sphx-glr-auto-examples-applications-plot-out-of-core-classification-py)
scikit_learn sklearn.linear_model.TheilSenRegressor sklearn.linear\_model.TheilSenRegressor
=======================================
*class*sklearn.linear\_model.TheilSenRegressor(*\**, *fit\_intercept=True*, *copy\_X=True*, *max\_subpopulation=10000.0*, *n\_subsamples=None*, *max\_iter=300*, *tol=0.001*, *random\_state=None*, *n\_jobs=None*, *verbose=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_theil_sen.py#L208)
Theil-Sen Estimator: robust multivariate regression model.
The algorithm calculates least square solutions on subsets with size n\_subsamples of the samples in X. Any value of n\_subsamples between the number of features and the number of samples leads to an estimator with a compromise between robustness and efficiency. Since the number of least square solutions is “n\_samples choose n\_subsamples”, it can be extremely large and can therefore be limited with max\_subpopulation. If this limit is reached, the subsets are chosen randomly. In a final step, the spatial median (or L1 median) of all least square solutions is calculated.
Read more in the [User Guide](../linear_model#theil-sen-regression).
Parameters:
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations.
**copy\_X**bool, default=True
If True, X will be copied; else, it may be overwritten.
**max\_subpopulation**int, default=1e4
Instead of computing with a set of cardinality ‘n choose k’, where n is the number of samples and k is the number of subsamples (at least number of features), consider only a stochastic subpopulation of a given maximal size if ‘n choose k’ is larger than max\_subpopulation. For other than small problem sizes this parameter will determine memory usage and runtime if n\_subsamples is not changed. Note that the data type should be int but floats such as 1e4 can be accepted too.
**n\_subsamples**int, default=None
Number of samples to calculate the parameters. This is at least the number of features (plus 1 if fit\_intercept=True) and the number of samples as a maximum. A lower number leads to a higher breakdown point and a low efficiency while a high number leads to a low breakdown point and a high efficiency. If None, take the minimum number of subsamples leading to maximal robustness. If n\_subsamples is set to n\_samples, Theil-Sen is identical to least squares.
**max\_iter**int, default=300
Maximum number of iterations for the calculation of spatial median.
**tol**float, default=1e-3
Tolerance when calculating spatial median.
**random\_state**int, RandomState instance or None, default=None
A random number generator instance to define the state of the random permutations generator. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**n\_jobs**int, default=None
Number of CPUs to use during the cross validation. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**bool, default=False
Verbose mode when fitting the model.
Attributes:
**coef\_**ndarray of shape (n\_features,)
Coefficients of the regression model (median of distribution).
**intercept\_**float
Estimated intercept of regression model.
**breakdown\_**float
Approximated breakdown point.
**n\_iter\_**int
Number of iterations needed for the spatial median.
**n\_subpopulation\_**int
Number of combinations taken into account from ‘n choose k’, where n is the number of samples and k is the number of subsamples.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`HuberRegressor`](sklearn.linear_model.huberregressor#sklearn.linear_model.HuberRegressor "sklearn.linear_model.HuberRegressor")
Linear regression model that is robust to outliers.
[`RANSACRegressor`](sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor "sklearn.linear_model.RANSACRegressor")
RANSAC (RANdom SAmple Consensus) algorithm.
[`SGDRegressor`](sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor")
Fitted by minimizing a regularized empirical loss with SGD.
#### References
* Theil-Sen Estimators in a Multiple Linear Regression Model, 2009 Xin Dang, Hanxiang Peng, Xueqin Wang and Heping Zhang <http://home.olemiss.edu/~xdang/papers/MTSE.pdf>
#### Examples
```
>>> from sklearn.linear_model import TheilSenRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(
... n_samples=200, n_features=2, noise=4.0, random_state=0)
>>> reg = TheilSenRegressor(random_state=0).fit(X, y)
>>> reg.score(X, y)
0.9884...
>>> reg.predict(X[:1,])
array([-31.5871...])
```
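The robustness/efficiency trade-off described above can be explored by setting `n_subsamples` and `max_subpopulation` explicitly (a sketch reusing `X` and `y` from the example; the values are illustrative, not recommendations):
```
reg_robust = TheilSenRegressor(
    n_subsamples=3,           # n_features + 1: higher breakdown point, lower efficiency
    max_subpopulation=5000,   # cap on the number of 'n choose k' subsets considered
    random_state=0,
).fit(X, y)
```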
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.TheilSenRegressor.fit "sklearn.linear_model.TheilSenRegressor.fit")(X, y) | Fit linear model. |
| [`get_params`](#sklearn.linear_model.TheilSenRegressor.get_params "sklearn.linear_model.TheilSenRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.TheilSenRegressor.predict "sklearn.linear_model.TheilSenRegressor.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.TheilSenRegressor.score "sklearn.linear_model.TheilSenRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.TheilSenRegressor.set_params "sklearn.linear_model.TheilSenRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_theil_sen.py#L393)
Fit linear model.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Training data.
**y**ndarray of shape (n\_samples,)
Target values.
Returns:
**self**object
Fitted `TheilSenRegressor` estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.TheilSenRegressor`
-------------------------------------------------------
[Robust linear estimator fitting](../../auto_examples/linear_model/plot_robust_fit#sphx-glr-auto-examples-linear-model-plot-robust-fit-py)
[Theil-Sen Regression](../../auto_examples/linear_model/plot_theilsen#sphx-glr-auto-examples-linear-model-plot-theilsen-py)
scikit_learn sklearn.ensemble.StackingRegressor sklearn.ensemble.StackingRegressor
==================================
*class*sklearn.ensemble.StackingRegressor(*estimators*, *final\_estimator=None*, *\**, *cv=None*, *n\_jobs=None*, *passthrough=False*, *verbose=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_stacking.py#L675)
Stack of estimators with a final regressor.
Stacked generalization consists of stacking the outputs of the individual estimators and using a regressor to compute the final prediction. Stacking allows one to use the strength of each individual estimator by using their output as the input of a final estimator.
Note that `estimators_` are fitted on the full `X` while `final_estimator_` is trained using cross-validated predictions of the base estimators using `cross_val_predict`.
Read more in the [User Guide](../ensemble#stacking).
New in version 0.22.
Parameters:
**estimators**list of (str, estimator)
Base estimators which will be stacked together. Each element of the list is defined as a tuple of string (i.e. name) and an estimator instance. An estimator can be set to ‘drop’ using `set_params`.
**final\_estimator**estimator, default=None
A regressor which will be used to combine the base estimators. The default regressor is a [`RidgeCV`](sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV "sklearn.linear_model.RidgeCV").
**cv**int, cross-validation generator, iterable, or “prefit”, default=None
Determines the cross-validation splitting strategy used in `cross_val_predict` to train `final_estimator`. Possible inputs for cv are:
* None, to use the default 5-fold cross validation,
* integer, to specify the number of folds in a (Stratified) KFold,
* An object to be used as a cross-validation generator,
* An iterable yielding train, test splits.
* “prefit” to assume the `estimators` are prefit, and skip cross validation
For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer to the [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
If “prefit” is passed, it is assumed that all `estimators` have been fitted already. The `final_estimator_` is trained on the `estimators` predictions on the full training set; these are **not** cross-validated predictions. Please note that if the base models have been trained on the same data that is used to train the stacking model, there is a very high risk of overfitting.
New in version 1.1: The ‘prefit’ option was added in 1.1
Note
A larger number of splits will provide no benefit if the number of training samples is large enough; it will only increase the training time. `cv` is not used for model evaluation but for prediction.
**n\_jobs**int, default=None
The number of jobs to run in parallel for `fit` of all `estimators`. `None` means 1 unless in a `joblib.parallel_backend` context. -1 means using all processors. See Glossary for more details.
**passthrough**bool, default=False
When False, only the predictions of estimators will be used as training data for `final_estimator`. When True, the `final_estimator` is trained on the predictions as well as the original training data.
**verbose**int, default=0
Verbosity level.
Attributes:
**estimators\_**list of estimator
The elements of the `estimators` parameter, having been fitted on the training data. If an estimator has been set to `'drop'`, it will not appear in `estimators_`. When `cv="prefit"`, `estimators_` is set to `estimators` and is not fitted again.
**named\_estimators\_**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Attribute to access any fitted sub-estimators by name.
[`n_features_in_`](#sklearn.ensemble.StackingRegressor.n_features_in_ "sklearn.ensemble.StackingRegressor.n_features_in_")int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying estimators expose such an attribute when fit. .. versionadded:: 1.0
**final\_estimator\_**estimator
The regressor which stacks the fitted base estimators.
**stack\_method\_**list of str
The method used by each base estimator.
See also
[`StackingClassifier`](sklearn.ensemble.stackingclassifier#sklearn.ensemble.StackingClassifier "sklearn.ensemble.StackingClassifier")
Stack of estimators with a final classifier.
#### References
[1] Wolpert, David H. “Stacked generalization.” Neural networks 5.2 (1992): 241-259.
#### Examples
```
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> from sklearn.svm import LinearSVR
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> estimators = [
... ('lr', RidgeCV()),
... ('svr', LinearSVR(random_state=42))
... ]
>>> reg = StackingRegressor(
... estimators=estimators,
... final_estimator=RandomForestRegressor(n_estimators=10,
... random_state=42)
... )
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=42
... )
>>> reg.fit(X_train, y_train).score(X_test, y_test)
0.3...
```
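As a hedged follow-on sketch (not part of the official example above), the `cv="prefit"` option can be exercised by fitting the base estimators beforehand; note that fitting the stack on the same data used to fit the base estimators carries the overfitting risk noted in the `cv` description.
```
from sklearn.datasets import load_diabetes
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVR

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit the base estimators up front, then hand them to the stack as prefit.
fitted = [
    ("lr", RidgeCV().fit(X_train, y_train)),
    ("svr", LinearSVR(random_state=42).fit(X_train, y_train)),
]
reg_prefit = StackingRegressor(estimators=fitted, cv="prefit")
reg_prefit.fit(X_train, y_train)   # only the default RidgeCV final estimator is trained
print(reg_prefit.score(X_test, y_test))
```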
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.ensemble.StackingRegressor.fit "sklearn.ensemble.StackingRegressor.fit")(X, y[, sample\_weight]) | Fit the estimators. |
| [`fit_transform`](#sklearn.ensemble.StackingRegressor.fit_transform "sklearn.ensemble.StackingRegressor.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.ensemble.StackingRegressor.get_feature_names_out "sklearn.ensemble.StackingRegressor.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.ensemble.StackingRegressor.get_params "sklearn.ensemble.StackingRegressor.get_params")([deep]) | Get the parameters of an estimator from the ensemble. |
| [`predict`](#sklearn.ensemble.StackingRegressor.predict "sklearn.ensemble.StackingRegressor.predict")(X, \*\*predict\_params) | Predict target for X. |
| [`score`](#sklearn.ensemble.StackingRegressor.score "sklearn.ensemble.StackingRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.ensemble.StackingRegressor.set_params "sklearn.ensemble.StackingRegressor.set_params")(\*\*params) | Set the parameters of an estimator from the ensemble. |
| [`transform`](#sklearn.ensemble.StackingRegressor.transform "sklearn.ensemble.StackingRegressor.transform")(X) | Return the predictions for X for each estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_stacking.py#L843)
Fit the estimators.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.
Returns:
**self**object
Returns a fitted instance.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_stacking.py#L283)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features. The input feature names are only used when `passthrough` is `True`.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then names are generated: `[x0, x1, ..., x(n_features_in_ - 1)]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
If `passthrough` is `False`, then only the names of `estimators` are used to generate the output feature names.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_base.py#L310)
Get the parameters of an estimator from the ensemble.
Returns the parameters given in the constructor as well as the estimators contained within the `estimators` parameter.
Parameters:
**deep**bool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.
Returns:
**params**dict
Parameter and estimator names mapped to their values.
*property*n\_features\_in\_
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
*property*named\_estimators
Dictionary to access any fitted sub-estimators by name.
Returns:
[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
predict(*X*, *\*\*predict\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_stacking.py#L328)
Predict target for X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**\*\*predict\_params**dict of str -> obj
Parameters to the `predict` called by the `final_estimator`. Note that this may be used to return uncertainties from some estimators with `return_std` or `return_cov`. Be aware that it will only account for uncertainty in the final estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_output)
Predicted targets.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_base.py#L285)
Set the parameters of an estimator from the ensemble.
Valid parameter keys can be listed with `get_params()`. Note that you can directly set the parameters of the estimators contained in `estimators`.
Parameters:
**\*\*params**keyword arguments
Specific parameters using e.g. `set_params(parameter_name=new_value)`. In addition to setting the parameters of the estimator, the individual estimators contained in `estimators` can also be set, or removed by setting them to ‘drop’.
Returns:
**self**object
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_stacking.py#L868)
Return the predictions for X for each estimator.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
Returns:
**y\_preds**ndarray of shape (n\_samples, n\_estimators)
Prediction outputs for each estimator.
Examples using `sklearn.ensemble.StackingRegressor`
---------------------------------------------------
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
scikit_learn sklearn.utils.validation.column_or_1d sklearn.utils.validation.column\_or\_1d
=======================================
sklearn.utils.validation.column\_or\_1d(*y*, *\**, *warn=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L1120)
Ravel column or 1d numpy array, else raises an error.
Parameters:
**y**array-like
Input data.
**warn**bool, default=False
To control display of warnings.
Returns:
**y**ndarray
Output data.
Raises:
ValueError
If `y` is not a 1D array or a 2D array with a single row or column.
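A minimal usage sketch (not from the original documentation) of the ravel-or-raise behaviour described above:
```
import numpy as np
from sklearn.utils.validation import column_or_1d

y_col = np.array([[1], [2], [3]])        # 2D column vector
print(column_or_1d(y_col, warn=True))    # raveled to array([1, 2, 3]), with a DataConversionWarning

y_bad = np.array([[1, 2], [3, 4]])       # 2D with more than one column
# column_or_1d(y_bad)                    # would raise ValueError, as documented above
```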
scikit_learn sklearn.manifold.locally_linear_embedding sklearn.manifold.locally\_linear\_embedding
===========================================
sklearn.manifold.locally\_linear\_embedding(*X*, *\**, *n\_neighbors*, *n\_components*, *reg=0.001*, *eigen\_solver='auto'*, *tol=1e-06*, *max\_iter=100*, *method='standard'*, *hessian\_tol=0.0001*, *modified\_tol=1e-12*, *random\_state=None*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/manifold/_locally_linear.py#L198)
Perform a Locally Linear Embedding analysis on the data.
Read more in the [User Guide](../manifold#locally-linear-embedding).
Parameters:
**X**{array-like, NearestNeighbors}
Sample data, shape = (n\_samples, n\_features), in the form of a numpy array or a NearestNeighbors object.
**n\_neighbors**int
Number of neighbors to consider for each point.
**n\_components**int
Number of coordinates for the manifold.
**reg**float, default=1e-3
Regularization constant, multiplies the trace of the local covariance matrix of the distances.
**eigen\_solver**{‘auto’, ‘arpack’, ‘dense’}, default=’auto’
* ‘auto’ : algorithm will attempt to choose the best method for input data.
* ‘arpack’ : use Arnoldi iteration in shift-invert mode. For this method, M may be a dense matrix, sparse matrix, or general linear operator. Warning: ARPACK can be unstable for some problems. It is best to try several random seeds in order to check results.
* ‘dense’ : use standard dense matrix operations for the eigenvalue decomposition. For this method, M must be an array or matrix type. This method should be avoided for large problems.
**tol**float, default=1e-6
Tolerance for the ‘arpack’ method. Not used if eigen\_solver==’dense’.
**max\_iter**int, default=100
Maximum number of iterations for the arpack solver.
**method**{‘standard’, ‘hessian’, ‘modified’, ‘ltsa’}, default=’standard’
* ‘standard’ : use the standard locally linear embedding algorithm. See reference [[1]](#rb2a5641379f7-1).
* ‘hessian’ : use the Hessian eigenmap method. This method requires `n_neighbors > n_components * (1 + (n_components + 1) / 2)`. See reference [[2]](#rb2a5641379f7-2).
* ‘modified’ : use the modified locally linear embedding algorithm. See reference [[3]](#rb2a5641379f7-3).
* ‘ltsa’ : use the local tangent space alignment algorithm. See reference [[4]](#rb2a5641379f7-4).
**hessian\_tol**float, default=1e-4
Tolerance for Hessian eigenmapping method. Only used if method == ‘hessian’
**modified\_tol**float, default=1e-12
Tolerance for modified LLE method. Only used if method == ‘modified’
**random\_state**int, RandomState instance, default=None
Determines the random number generator when `solver` == ‘arpack’. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**n\_jobs**int or None, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Returns:
**Y**array-like, shape [n\_samples, n\_components]
Embedding vectors.
**squared\_error**float
Reconstruction error for the embedding vectors. Equivalent to `norm(Y - W Y, 'fro')**2`, where W are the reconstruction weights.
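A minimal usage sketch (not part of the original page), embedding a swiss-roll dataset into two dimensions with the default ‘standard’ method:
```
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import locally_linear_embedding

X, _ = make_swiss_roll(n_samples=500, random_state=0)
Y, squared_error = locally_linear_embedding(X, n_neighbors=12, n_components=2)
print(Y.shape)          # (500, 2): embedding vectors
print(squared_error)    # reconstruction error of the embedding
```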
#### References
[[1](#id1)] Roweis, S. & Saul, L. Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323 (2000).
[[2](#id2)] Donoho, D. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 100:5591 (2003).
[[3](#id3)] Zhang, Z. & Wang, J. MLLE: Modified Locally Linear Embedding Using Multiple Weights. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382>
[[4](#id4)] Zhang, Z. & Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. Journal of Shanghai Univ. 8:406 (2004)
Examples using `sklearn.manifold.locally_linear_embedding`
----------------------------------------------------------
[Swiss Roll And Swiss-Hole Reduction](../../auto_examples/manifold/plot_swissroll#sphx-glr-auto-examples-manifold-plot-swissroll-py)
scikit_learn sklearn.exceptions.UndefinedMetricWarning sklearn.exceptions.UndefinedMetricWarning
=========================================
*class*sklearn.exceptions.UndefinedMetricWarning[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/exceptions.py#L113)
Warning used when the metric is invalid
Changed in version 0.18: Moved from sklearn.base.
Attributes:
**args**
#### Methods
| | |
| --- | --- |
| [`with_traceback`](#sklearn.exceptions.UndefinedMetricWarning.with_traceback "sklearn.exceptions.UndefinedMetricWarning.with_traceback") | Exception.with\_traceback(tb) -- set self.\_\_traceback\_\_ to tb and return self. |
with\_traceback()
Exception.with\_traceback(tb) – set self.\_\_traceback\_\_ to tb and return self.
scikit_learn sklearn.model_selection.cross_val_predict sklearn.model\_selection.cross\_val\_predict
============================================
sklearn.model\_selection.cross\_val\_predict(*estimator*, *X*, *y=None*, *\**, *groups=None*, *cv=None*, *n\_jobs=None*, *verbose=0*, *fit\_params=None*, *pre\_dispatch='2\*n\_jobs'*, *method='predict'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_validation.py#L803)
Generate cross-validated estimates for each input data point.
The data is split according to the cv parameter. Each sample belongs to exactly one test set, and its prediction is computed with an estimator fitted on the corresponding training set.
Passing these predictions into an evaluation metric may not be a valid way to measure generalization performance. Results can differ from [`cross_validate`](sklearn.model_selection.cross_validate#sklearn.model_selection.cross_validate "sklearn.model_selection.cross_validate") and [`cross_val_score`](sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score") unless all test sets have equal size and the metric decomposes over samples.
Read more in the [User Guide](../cross_validation#cross-validation).
Parameters:
**estimator**estimator object implementing ‘fit’ and ‘predict’
The object to use to fit the data.
**X**array-like of shape (n\_samples, n\_features)
The data to fit. Can be, for example, a list or an array of at least 2 dimensions.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
The target variable to try to predict in the case of supervised learning.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) instance (e.g., [`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")).
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross validation,
* int, to specify the number of folds in a `(Stratified)KFold`,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable that generates (train, test) splits as arrays of indices.
For int/None inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**n\_jobs**int, default=None
Number of jobs to run in parallel. Training the estimator and predicting are parallelized over the cross-validation splits. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**int, default=0
The verbosity level.
**fit\_params**dict, default=None
Parameters to pass to the fit method of the estimator.
**pre\_dispatch**int or str, default=’2\*n\_jobs’
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
* None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
* An int, giving the exact number of total jobs that are spawned
* A str, giving an expression as a function of n\_jobs, as in ‘2\*n\_jobs’
**method**{‘predict’, ‘predict\_proba’, ‘predict\_log\_proba’, ‘decision\_function’}, default=’predict’
The method to be invoked by `estimator`.
Returns:
**predictions**ndarray
This is the result of calling `method`. Shape:
* When `method` is ‘predict’ and in special case where `method` is ‘decision\_function’ and the target is binary: (n\_samples,)
* When `method` is one of {‘predict\_proba’, ‘predict\_log\_proba’, ‘decision\_function’} (unless special case above): (n\_samples, n\_classes)
* If `estimator` is [multioutput](https://scikit-learn.org/1.1/glossary.html#term-multioutput), an extra dimension ‘n\_outputs’ is added to the end of each shape above.
See also
[`cross_val_score`](sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score "sklearn.model_selection.cross_val_score")
Calculate score for each CV split.
[`cross_validate`](sklearn.model_selection.cross_validate#sklearn.model_selection.cross_validate "sklearn.model_selection.cross_validate")
Calculate one or more scores and timings for each CV split.
#### Notes
In the case that one or more classes are absent in a training portion, a default score needs to be assigned to all instances for that class if `method` produces columns per class, as in {‘decision\_function’, ‘predict\_proba’, ‘predict\_log\_proba’}. For `predict_proba` this value is 0. In order to ensure finite output, we approximate negative infinity by the minimum finite float value for the dtype in other cases.
#### Examples
```
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_predict
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> y_pred = cross_val_predict(lasso, X, y, cv=3)
```
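A hedged extension of the example above (not from the original page): requesting per-class probabilities with `method='predict_proba'`, which yields the `(n_samples, n_classes)` shape described under Returns.
```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")
print(proba.shape)   # (150, 3): one column per class, rows sum to 1
```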
Examples using `sklearn.model_selection.cross_val_predict`
----------------------------------------------------------
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Plotting Cross-Validated Predictions](../../auto_examples/model_selection/plot_cv_predict#sphx-glr-auto-examples-model-selection-plot-cv-predict-py)
scikit_learn sklearn.tree.plot_tree sklearn.tree.plot\_tree
=======================
sklearn.tree.plot\_tree(*decision\_tree*, *\**, *max\_depth=None*, *feature\_names=None*, *class\_names=None*, *label='all'*, *filled=False*, *impurity=True*, *node\_ids=False*, *proportion=False*, *rounded=False*, *precision=3*, *ax=None*, *fontsize=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_export.py#L78)
Plot a decision tree.
The sample counts that are shown are weighted with any sample\_weights that might be present.
The visualization is fit automatically to the size of the axis. Use the `figsize` or `dpi` arguments of `plt.figure` to control the size of the rendering.
Read more in the [User Guide](../tree#tree).
New in version 0.21.
Parameters:
**decision\_tree**decision tree regressor or classifier
The decision tree to be plotted.
**max\_depth**int, default=None
The maximum depth of the representation. If None, the tree is fully generated.
**feature\_names**list of strings, default=None
Names of each of the features. If None, generic names will be used (“X[0]”, “X[1]”, …).
**class\_names**list of str or bool, default=None
Names of each of the target classes in ascending numerical order. Only relevant for classification and not supported for multi-output. If `True`, shows a symbolic representation of the class name.
**label**{‘all’, ‘root’, ‘none’}, default=’all’
Whether to show informative labels for impurity, etc. Options include ‘all’ to show at every node, ‘root’ to show only at the top root node, or ‘none’ to not show at any node.
**filled**bool, default=False
When set to `True`, paint nodes to indicate majority class for classification, extremity of values for regression, or purity of node for multi-output.
**impurity**bool, default=True
When set to `True`, show the impurity at each node.
**node\_ids**bool, default=False
When set to `True`, show the ID number on each node.
**proportion**bool, default=False
When set to `True`, change the display of ‘values’ and/or ‘samples’ to be proportions and percentages respectively.
**rounded**bool, default=False
When set to `True`, draw node boxes with rounded corners and use Helvetica fonts instead of Times-Roman.
**precision**int, default=3
Number of digits of precision for floating point in the values of impurity, threshold and value attributes of each node.
**ax**matplotlib axis, default=None
Axes to plot to. If None, use current axis. Any previous content is cleared.
**fontsize**int, default=None
Size of text font. If None, determined automatically to fit figure.
Returns:
**annotations**list of artists
List containing the artists for the annotation boxes making up the tree.
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn import tree
```
```
>>> clf = tree.DecisionTreeClassifier(random_state=0)
>>> iris = load_iris()
```
```
>>> clf = clf.fit(iris.data, iris.target)
>>> tree.plot_tree(clf)
[...]
```
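A hedged follow-on sketch (not part of the official example) of how the `figsize` and `dpi` arguments of `plt.figure` mentioned above control the rendering size:
```
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# figsize/dpi are set on the figure; plot_tree then fills the current axes.
plt.figure(figsize=(12, 8), dpi=100)
tree.plot_tree(clf, filled=True,
               feature_names=iris.feature_names,
               class_names=list(iris.target_names))
plt.show()
```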
Examples using `sklearn.tree.plot_tree`
---------------------------------------
[Plot the decision surface of decision trees trained on the iris dataset](../../auto_examples/tree/plot_iris_dtc#sphx-glr-auto-examples-tree-plot-iris-dtc-py)
[Understanding the decision tree structure](../../auto_examples/tree/plot_unveil_tree_structure#sphx-glr-auto-examples-tree-plot-unveil-tree-structure-py)
scikit_learn sklearn.tree.export_text sklearn.tree.export\_text
=========================
sklearn.tree.export\_text(*decision\_tree*, *\**, *feature\_names=None*, *max\_depth=10*, *spacing=3*, *decimals=2*, *show\_weights=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_export.py#L923)
Build a text report showing the rules of a decision tree.
Note that backwards compatibility may not be supported.
Parameters:
**decision\_tree**object
The decision tree estimator to be exported. It can be an instance of DecisionTreeClassifier or DecisionTreeRegressor.
**feature\_names**list of str, default=None
A list of length n\_features containing the feature names. If None, generic names will be used (“feature\_0”, “feature\_1”, …).
**max\_depth**int, default=10
Only the first max\_depth levels of the tree are exported. Truncated branches will be marked with “…”.
**spacing**int, default=3
Number of spaces between edges. The higher it is, the wider the result.
**decimals**int, default=2
Number of decimal digits to display.
**show\_weights**bool, default=False
If true, the classification weights will be exported on each leaf. The classification weights are the number of samples of each class.
Returns:
**report**str
Text summary of all the rules in the decision tree.
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.tree import export_text
>>> iris = load_iris()
>>> X = iris['data']
>>> y = iris['target']
>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
>>> decision_tree = decision_tree.fit(X, y)
>>> r = export_text(decision_tree, feature_names=iris['feature_names'])
>>> print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2
```
scikit_learn sklearn.decomposition.fastica sklearn.decomposition.fastica
=============================
sklearn.decomposition.fastica(*X*, *n\_components=None*, *\**, *algorithm='parallel'*, *whiten='warn'*, *fun='logcosh'*, *fun\_args=None*, *max\_iter=200*, *tol=0.0001*, *w\_init=None*, *random\_state=None*, *return\_X\_mean=False*, *compute\_sources=True*, *return\_n\_iter=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_fastica.py#L154)
Perform Fast Independent Component Analysis.
The implementation is based on [[1]](#r4ef46ec4ecf2-1).
Read more in the [User Guide](../decomposition#ica).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**n\_components**int, default=None
Number of components to use. If None is passed, all are used.
**algorithm**{‘parallel’, ‘deflation’}, default=’parallel’
Specify which algorithm to use for FastICA.
**whiten**str or bool, default=”warn”
Specify the whitening strategy to use.
* If ‘arbitrary-variance’ (default), a whitening with arbitrary variance is used.
* If ‘unit-variance’, the whitening matrix is rescaled to ensure that each recovered source has unit variance.
* If False, the data is already considered to be whitened, and no whitening is performed.
Deprecated since version 1.1: Starting in v1.3, `whiten='unit-variance'` will be used by default. `whiten=True` is deprecated from 1.1 and will raise ValueError in 1.3. Use `whiten='arbitrary-variance'` instead.
**fun**{‘logcosh’, ‘exp’, ‘cube’} or callable, default=’logcosh’
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, at the point. The derivative should be averaged along its last dimension. Example:
```
def my_g(x):
    return x ** 3, (3 * x ** 2).mean(axis=-1)
```
**fun\_args**dict, default=None
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun\_args will take value {‘alpha’ : 1.0}.
**max\_iter**int, default=200
Maximum number of iterations to perform.
**tol**float, default=1e-4
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
**w\_init**ndarray of shape (n\_components, n\_components), default=None
Initial un-mixing array. If `w_init=None`, then an array of values drawn from a normal distribution is used.
**random\_state**int, RandomState instance or None, default=None
Used to initialize `w_init` when not specified, with a normal distribution. Pass an int, for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**return\_X\_mean**bool, default=False
If True, X\_mean is returned too.
**compute\_sources**bool, default=True
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
**return\_n\_iter**bool, default=False
Whether or not to return the number of iterations.
Returns:
**K**ndarray of shape (n\_components, n\_features) or None
If whiten is ‘True’, K is the pre-whitening matrix that projects data onto the first n\_components principal components. If whiten is ‘False’, K is ‘None’.
**W**ndarray of shape (n\_components, n\_components)
The square matrix that unmixes the data after whitening. The mixing matrix is the pseudo-inverse of matrix `W K` if K is not None, else it is the inverse of W.
**S**ndarray of shape (n\_samples, n\_components) or None
Estimated source matrix.
**X\_mean**ndarray of shape (n\_features,)
The mean over features. Returned only if return\_X\_mean is True.
**n\_iter**int
If the algorithm is “deflation”, n\_iter is the maximum number of iterations run across all components. Otherwise it is just the number of iterations taken to converge. This is returned only when return\_n\_iter is set to `True`.
#### Notes
The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e. X = AS, where the columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to un-mix the data by estimating an un-mixing matrix W, where `S = W K X`. While FastICA was proposed to estimate as many sources as features, it is possible to estimate fewer by setting n\_components < n\_features. In this case, K is not a square matrix and the estimated A is the pseudo-inverse of `W K`.
This implementation was originally made for data of shape [n\_features, n\_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input.
#### References
[[1](#id1)] A. Hyvarinen and E. Oja, “Fast Independent Component Analysis”, Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430.
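A minimal usage sketch (not from the original page) of un-mixing two synthetic sources; `whiten='unit-variance'` is passed explicitly to sidestep the deprecation described above.
```
import numpy as np
from sklearn.decomposition import fastica

t = np.linspace(0, 8, 2000)
S_true = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
A = np.array([[1.0, 0.5], [0.5, 2.0]])                  # mixing matrix
X = S_true @ A.T                                         # observed mixtures

K, W, S = fastica(X, n_components=2, whiten="unit-variance", random_state=0)
print(K.shape, W.shape, S.shape)   # (2, 2) (2, 2) (2000, 2)
```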
scikit_learn sklearn.linear_model.LogisticRegressionCV sklearn.linear\_model.LogisticRegressionCV
==========================================
*class*sklearn.linear\_model.LogisticRegressionCV(*\**, *Cs=10*, *fit\_intercept=True*, *cv=None*, *dual=False*, *penalty='l2'*, *scoring=None*, *solver='lbfgs'*, *tol=0.0001*, *max\_iter=100*, *class\_weight=None*, *n\_jobs=None*, *verbose=0*, *refit=True*, *intercept\_scaling=1.0*, *multi\_class='auto'*, *random\_state=None*, *l1\_ratios=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_logistic.py#L1344)
Logistic Regression CV (aka logit, MaxEnt) classifier.
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizers. The newton-cg, sag and lbfgs solvers support only L2 regularization with primal formulation. The liblinear solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. Elastic-Net penalty is only supported by the saga solver.
For the grid of `Cs` values and `l1_ratios` values, the best hyperparameter is selected by the cross-validator [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold"), but it can be changed using the [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) parameter. The ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers can warm-start the coefficients (see [Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start)).
Read more in the [User Guide](../linear_model#logistic-regression).
Parameters:
**Cs**int or list of floats, default=10
Each of the values in Cs describes the inverse of regularization strength. If Cs is an int, then a grid of Cs values is chosen in a logarithmic scale between 1e-4 and 1e4. Like in support vector machines, smaller values specify stronger regularization.
**fit\_intercept**bool, default=True
Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
**cv**int or cross-validation generator, default=None
The default cross-validation generator used is Stratified K-Folds. If an integer is provided, then it is the number of folds used. See the [`sklearn.model_selection`](../classes#module-sklearn.model_selection "sklearn.model_selection") module for the list of possible cross-validation objects.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**dual**bool, default=False
Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n\_samples > n\_features.
**penalty**{‘l1’, ‘l2’, ‘elasticnet’}, default=’l2’
Specify the norm of the penalty:
* `'l2'`: add a L2 penalty term (used by default);
* `'l1'`: add a L1 penalty term;
* `'elasticnet'`: both L1 and L2 penalty terms are added.
Warning
Some penalties may not work with some solvers. See the parameter `solver` below, to know the compatibility between the penalty and solver.
**scoring**str or callable, default=None
A string (see model evaluation documentation) or a scorer callable object / function with signature `scorer(estimator, X, y)`. For a list of scoring functions that can be used, look at [`sklearn.metrics`](../classes#module-sklearn.metrics "sklearn.metrics"). The default scoring option used is ‘accuracy’.
**solver**{‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’
Algorithm to use in the optimization problem. Default is ‘lbfgs’. To choose a solver, you might want to consider the following aspects:
* For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones;
* For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss;
* ‘liblinear’ might be slower in [`LogisticRegressionCV`](#sklearn.linear_model.LogisticRegressionCV "sklearn.linear_model.LogisticRegressionCV") because it does not handle warm-starting. ‘liblinear’ is limited to one-versus-rest schemes.
Warning
The choice of the algorithm depends on the penalty chosen:
* ‘newton-cg’ - [‘l2’]
* ‘lbfgs’ - [‘l2’]
* ‘liblinear’ - [‘l1’, ‘l2’]
* ‘sag’ - [‘l2’]
* ‘saga’ - [‘elasticnet’, ‘l1’, ‘l2’]
Note
‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from [`sklearn.preprocessing`](../classes#module-sklearn.preprocessing "sklearn.preprocessing").
New in version 0.17: Stochastic Average Gradient descent solver.
New in version 0.19: SAGA solver.
**tol**float, default=1e-4
Tolerance for stopping criteria.
**max\_iter**int, default=100
Maximum number of iterations of the optimization algorithm.
**class\_weight**dict or ‘balanced’, default=None
Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`.
Note that these weights will be multiplied with sample\_weight (passed through the fit method) if sample\_weight is specified.
New in version 0.17: class\_weight == ‘balanced’
**n\_jobs**int, default=None
Number of CPU cores used during the cross-validation loop. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**int, default=0
For the ‘liblinear’, ‘sag’ and ‘lbfgs’ solvers set verbose to any positive number for verbosity.
**refit**bool, default=True
If set to True, the scores are averaged across all folds, the coefs and the C that correspond to the best score are taken, and a final refit is done using these parameters. Otherwise the coefs, intercepts and C that correspond to the best scores across folds are averaged.
**intercept\_scaling**float, default=1
Useful only when the solver ‘liblinear’ is used and self.fit\_intercept is set to True. In this case, x becomes [x, self.intercept\_scaling], i.e. a “synthetic” feature with constant value equal to intercept\_scaling is appended to the instance vector. The intercept becomes `intercept_scaling * synthetic_feature_weight`.
Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept\_scaling has to be increased.
**multi\_class**{‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’
If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’.
New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case.
Changed in version 0.22: Default changed from ‘ovr’ to ‘auto’ in 0.22.
**random\_state**int, RandomState instance, default=None
Used when `solver='sag'`, ‘saga’ or ‘liblinear’ to shuffle the data. Note that this only applies to the solver and not the cross-validation generator. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state) for details.
**l1\_ratios**list of float, default=None
The list of Elastic-Net mixing parameters, with `0 <= l1_ratio <= 1`. Only used if `penalty='elasticnet'`. A value of 0 is equivalent to using `penalty='l2'`, while 1 is equivalent to using `penalty='l1'`. For `0 < l1_ratio < 1`, the penalty is a combination of L1 and L2.
Attributes:
**classes\_**ndarray of shape (n\_classes, )
A list of class labels known to the classifier.
**coef\_**ndarray of shape (1, n\_features) or (n\_classes, n\_features)
Coefficient of the features in the decision function.
`coef_` is of shape (1, n\_features) when the given problem is binary.
**intercept\_**ndarray of shape (1,) or (n\_classes,)
Intercept (a.k.a. bias) added to the decision function.
If `fit_intercept` is set to False, the intercept is set to zero. `intercept_` is of shape(1,) when the problem is binary.
**Cs\_**ndarray of shape (n\_cs)
Array of C i.e. inverse of regularization parameter values used for cross-validation.
**l1\_ratios\_**ndarray of shape (n\_l1\_ratios)
Array of l1\_ratios used for cross-validation. If no l1\_ratio is used (i.e. penalty is not ‘elasticnet’), this is set to `[None]`
**coefs\_paths\_**ndarray of shape (n\_folds, n\_cs, n\_features) or (n\_folds, n\_cs, n\_features + 1)
dict with classes as the keys, and the path of coefficients obtained during cross-validating across each fold and then across each Cs after doing an OvR for the corresponding class as values. If the ‘multi\_class’ option is set to ‘multinomial’, then the coefs\_paths are the coefficients corresponding to each class. Each dict value has shape `(n_folds, n_cs, n_features)` or `(n_folds, n_cs, n_features + 1)` depending on whether the intercept is fit or not. If `penalty='elasticnet'`, the shape is `(n_folds, n_cs, n_l1_ratios_, n_features)` or `(n_folds, n_cs, n_l1_ratios_, n_features + 1)`.
**scores\_**dict
dict with classes as the keys, and the values as the grid of scores obtained during cross-validating each fold, after doing an OvR for the corresponding class. If the ‘multi\_class’ option given is ‘multinomial’ then the same scores are repeated across all classes, since this is the multinomial class. Each dict value has shape `(n_folds, n_cs)` or `(n_folds, n_cs, n_l1_ratios)` if `penalty='elasticnet'`.
**C\_**ndarray of shape (n\_classes,) or (n\_classes - 1,)
Array of C that maps to the best scores across every class. If refit is set to False, then for each class, the best C is the average of the C’s that correspond to the best scores for each fold. `C_` is of shape(n\_classes,) when the problem is binary.
**l1\_ratio\_**ndarray of shape (n\_classes,) or (n\_classes - 1,)
Array of l1\_ratio that maps to the best scores across every class. If refit is set to False, then for each class, the best l1\_ratio is the average of the l1\_ratio’s that correspond to the best scores for each fold. `l1_ratio_` is of shape(n\_classes,) when the problem is binary.
**n\_iter\_**ndarray of shape (n\_classes, n\_folds, n\_cs) or (1, n\_folds, n\_cs)
Actual number of iterations for all classes, folds and Cs. In the binary or multinomial cases, the first dimension is equal to 1. If `penalty='elasticnet'`, the shape is `(n_classes, n_folds, n_cs, n_l1_ratios)` or `(1, n_folds, n_cs, n_l1_ratios)`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression")
Logistic regression without tuning the hyperparameter `C`.
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=5, random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :]).shape
(2, 3)
>>> clf.score(X, y)
0.98...
```
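A hedged follow-on sketch (not part of the official example): cross-validating the elastic-net mixing ratio, which per the penalty/solver compatibility notes above requires the ‘saga’ solver and benefits from scaled features.
```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)   # saga converges faster on scaled features

clf = LogisticRegressionCV(Cs=5, penalty="elasticnet", solver="saga",
                           l1_ratios=[0.1, 0.5, 0.9], max_iter=5000,
                           random_state=0).fit(X, y)
print(clf.C_)         # best C per class
print(clf.l1_ratio_)  # best l1_ratio per class
```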
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.linear_model.LogisticRegressionCV.decision_function "sklearn.linear_model.LogisticRegressionCV.decision_function")(X) | Predict confidence scores for samples. |
| [`densify`](#sklearn.linear_model.LogisticRegressionCV.densify "sklearn.linear_model.LogisticRegressionCV.densify")() | Convert coefficient matrix to dense array format. |
| [`fit`](#sklearn.linear_model.LogisticRegressionCV.fit "sklearn.linear_model.LogisticRegressionCV.fit")(X, y[, sample\_weight]) | Fit the model according to the given training data. |
| [`get_params`](#sklearn.linear_model.LogisticRegressionCV.get_params "sklearn.linear_model.LogisticRegressionCV.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.LogisticRegressionCV.predict "sklearn.linear_model.LogisticRegressionCV.predict")(X) | Predict class labels for samples in X. |
| [`predict_log_proba`](#sklearn.linear_model.LogisticRegressionCV.predict_log_proba "sklearn.linear_model.LogisticRegressionCV.predict_log_proba")(X) | Predict logarithm of probability estimates. |
| [`predict_proba`](#sklearn.linear_model.LogisticRegressionCV.predict_proba "sklearn.linear_model.LogisticRegressionCV.predict_proba")(X) | Probability estimates. |
| [`score`](#sklearn.linear_model.LogisticRegressionCV.score "sklearn.linear_model.LogisticRegressionCV.score")(X, y[, sample\_weight]) | Score using the `scoring` option on the given test data and labels. |
| [`set_params`](#sklearn.linear_model.LogisticRegressionCV.set_params "sklearn.linear_model.LogisticRegressionCV.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`sparsify`](#sklearn.linear_model.LogisticRegressionCV.sparsify "sklearn.linear_model.LogisticRegressionCV.sparsify")() | Convert coefficient matrix to sparse format. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L408)
Predict confidence scores for samples.
The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the confidence scores.
Returns:
**scores**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Confidence scores per `(n_samples, n_classes)` combination. In the binary case, confidence score for `self.classes_[1]` where >0 means this class would be predicted.
densify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L477)
Convert coefficient matrix to dense array format.
Converts the `coef_` member (back) to a numpy.ndarray. This is the default format of `coef_` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns:
self
Fitted estimator.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_logistic.py#L1651)
Fit the model according to the given training data.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target vector relative to X.
**sample\_weight**array-like of shape (n\_samples,) default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight.
Returns:
**self**object
Fitted LogisticRegressionCV estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L433)
Predict class labels for samples in X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the predictions.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
Vector containing the class labels for each sample.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_logistic.py#L1322)
Predict logarithm of probability estimates.
The returned estimates for all classes are ordered by the label of classes.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Vector to be scored, where `n_samples` is the number of samples and `n_features` is the number of features.
Returns:
**T**array-like of shape (n\_samples, n\_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_logistic.py#L1278)
Probability estimates.
The returned estimates for all classes are ordered by the label of classes.
For a multi\_class problem, if multi\_class is set to “multinomial” the softmax function is used to find the predicted probability of each class. Otherwise a one-vs-rest approach is used, i.e. the probability of each class is calculated assuming it to be positive using the logistic function, and these values are normalized across all the classes.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Vector to be scored, where `n_samples` is the number of samples and `n_features` is the number of features.
Returns:
**T**array-like of shape (n\_samples, n\_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_logistic.py#L1999)
Score using the `scoring` option on the given test data and labels.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,)
True labels for X.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Score of self.predict(X) wrt. y.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
sparsify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L497)
Convert coefficient matrix to sparse format.
Converts the `coef_` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The `intercept_` member is not converted.
Returns:
self
Fitted estimator.
#### Notes
For non-sparse models, i.e. when there are not many zeros in `coef_`, this may actually *increase* memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with `(coef_ == 0).sum()`, must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial\_fit method (if any) will not work until you call densify.
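A hedged sketch (not from the original docs) applying the rule of thumb above before sparsifying; the dataset and L1-penalized model below are illustrative assumptions.
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
clf = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=5,
                           random_state=0).fit(X, y)

# Rule of thumb from the Notes above: sparsify only if >50% of coefficients are zero.
n_zero = (clf.coef_ == 0).sum()
if n_zero / clf.coef_.size > 0.5:
    clf.sparsify()
print(type(clf.coef_))   # scipy sparse matrix if sparsified, ndarray otherwise
```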
| programming_docs |
scikit_learn sklearn.metrics.ConfusionMatrixDisplay sklearn.metrics.ConfusionMatrixDisplay
======================================
*class*sklearn.metrics.ConfusionMatrixDisplay(*confusion\_matrix*, *\**, *display\_labels=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_plot/confusion_matrix.py#L12)
Confusion Matrix visualization.
It is recommended to use [`from_estimator`](#sklearn.metrics.ConfusionMatrixDisplay.from_estimator "sklearn.metrics.ConfusionMatrixDisplay.from_estimator") or [`from_predictions`](#sklearn.metrics.ConfusionMatrixDisplay.from_predictions "sklearn.metrics.ConfusionMatrixDisplay.from_predictions") to create a [`ConfusionMatrixDisplay`](#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay"). All parameters are stored as attributes.
Read more in the [User Guide](https://scikit-learn.org/1.1/visualizations.html#visualizations).
Parameters:
**confusion\_matrix**ndarray of shape (n\_classes, n\_classes)
Confusion matrix.
**display\_labels**ndarray of shape (n\_classes,), default=None
Display labels for plot. If None, display labels are set from 0 to `n_classes - 1`.
Attributes:
**im\_**matplotlib AxesImage
Image representing the confusion matrix.
**text\_**ndarray of shape (n\_classes, n\_classes), dtype=matplotlib Text, or None
Array of matplotlib Text objects containing the rendered cell values. `None` if `include_values` is false.
**ax\_**matplotlib Axes
Axes with confusion matrix.
**figure\_**matplotlib Figure
Figure containing the confusion matrix.
See also
[`confusion_matrix`](sklearn.metrics.confusion_matrix#sklearn.metrics.confusion_matrix "sklearn.metrics.confusion_matrix")
Compute Confusion Matrix to evaluate the accuracy of a classification.
[`ConfusionMatrixDisplay.from_estimator`](#sklearn.metrics.ConfusionMatrixDisplay.from_estimator "sklearn.metrics.ConfusionMatrixDisplay.from_estimator")
Plot the confusion matrix given an estimator, the data, and the label.
[`ConfusionMatrixDisplay.from_predictions`](#sklearn.metrics.ConfusionMatrixDisplay.from_predictions "sklearn.metrics.ConfusionMatrixDisplay.from_predictions")
Plot the confusion matrix given the true and predicted labels.
#### Examples
```
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
... random_state=0)
>>> clf = SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> predictions = clf.predict(X_test)
>>> cm = confusion_matrix(y_test, predictions, labels=clf.classes_)
>>> disp = ConfusionMatrixDisplay(confusion_matrix=cm,
... display_labels=clf.classes_)
>>> disp.plot()
<...>
>>> plt.show()
```
#### Methods
| | |
| --- | --- |
| [`from_estimator`](#sklearn.metrics.ConfusionMatrixDisplay.from_estimator "sklearn.metrics.ConfusionMatrixDisplay.from_estimator")(estimator, X, y, \*[, labels, ...]) | Plot Confusion Matrix given an estimator and some data. |
| [`from_predictions`](#sklearn.metrics.ConfusionMatrixDisplay.from_predictions "sklearn.metrics.ConfusionMatrixDisplay.from_predictions")(y\_true, y\_pred, \*[, ...]) | Plot Confusion Matrix given true and predicted labels. |
| [`plot`](#sklearn.metrics.ConfusionMatrixDisplay.plot "sklearn.metrics.ConfusionMatrixDisplay.plot")(\*[, include\_values, cmap, ...]) | Plot visualization. |
*classmethod*from\_estimator(*estimator*, *X*, *y*, *\**, *labels=None*, *sample\_weight=None*, *normalize=None*, *display\_labels=None*, *include\_values=True*, *xticks\_rotation='horizontal'*, *values\_format=None*, *cmap='viridis'*, *ax=None*, *colorbar=True*, *im\_kw=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_plot/confusion_matrix.py#L188)
Plot Confusion Matrix given an estimator and some data.
Read more in the [User Guide](../model_evaluation#confusion-matrix).
New in version 1.0.
Parameters:
**estimator**estimator instance
Fitted classifier or a fitted [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") in which the last estimator is a classifier.
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Input values.
**y**array-like of shape (n\_samples,)
Target values.
**labels**array-like of shape (n\_classes,), default=None
List of labels to index the confusion matrix. This may be used to reorder or select a subset of labels. If `None` is given, those that appear at least once in `y_true` or `y_pred` are used in sorted order.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**normalize**{‘true’, ‘pred’, ‘all’}, default=None
Whether to normalize the counts displayed in the matrix:
* if `'true'`, the confusion matrix is normalized over the true conditions (e.g. rows);
* if `'pred'`, the confusion matrix is normalized over the predicted conditions (e.g. columns);
* if `'all'`, the confusion matrix is normalized by the total number of samples;
* if `None` (default), the confusion matrix will not be normalized.
**display\_labels**array-like of shape (n\_classes,), default=None
Target names used for plotting. By default, `labels` will be used if it is defined, otherwise the unique labels of `y_true` and `y_pred` will be used.
**include\_values**bool, default=True
Includes values in confusion matrix.
**xticks\_rotation**{‘vertical’, ‘horizontal’} or float, default=’horizontal’
Rotation of xtick labels.
**values\_format**str, default=None
Format specification for values in confusion matrix. If `None`, the format specification is ‘d’ or ‘.2g’ whichever is shorter.
**cmap**str or matplotlib Colormap, default=’viridis’
Colormap recognized by matplotlib.
**ax**matplotlib Axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
**colorbar**bool, default=True
Whether or not to add a colorbar to the plot.
**im\_kw**dict, default=None
Dict with keywords passed to `matplotlib.pyplot.imshow` call.
Returns:
**display**[`ConfusionMatrixDisplay`](#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay")
See also
[`ConfusionMatrixDisplay.from_predictions`](#sklearn.metrics.ConfusionMatrixDisplay.from_predictions "sklearn.metrics.ConfusionMatrixDisplay.from_predictions")
Plot the confusion matrix given the true and predicted labels.
#### Examples
```
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import ConfusionMatrixDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> ConfusionMatrixDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
```
*classmethod*from\_predictions(*y\_true*, *y\_pred*, *\**, *labels=None*, *sample\_weight=None*, *normalize=None*, *display\_labels=None*, *include\_values=True*, *xticks\_rotation='horizontal'*, *values\_format=None*, *cmap='viridis'*, *ax=None*, *colorbar=True*, *im\_kw=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_plot/confusion_matrix.py#L323)
Plot Confusion Matrix given true and predicted labels.
Read more in the [User Guide](../model_evaluation#confusion-matrix).
New in version 1.0.
Parameters:
**y\_true**array-like of shape (n\_samples,)
True labels.
**y\_pred**array-like of shape (n\_samples,)
The predicted labels given by the method `predict` of a classifier.
**labels**array-like of shape (n\_classes,), default=None
List of labels to index the confusion matrix. This may be used to reorder or select a subset of labels. If `None` is given, those that appear at least once in `y_true` or `y_pred` are used in sorted order.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**normalize**{‘true’, ‘pred’, ‘all’}, default=None
How to normalize the counts displayed in the matrix:
* if `'true'`, the confusion matrix is normalized over the true conditions (i.e. rows);
* if `'pred'`, the confusion matrix is normalized over the predicted conditions (i.e. columns);
* if `'all'`, the confusion matrix is normalized by the total number of samples;
* if `None` (default), the confusion matrix will not be normalized.
**display\_labels**array-like of shape (n\_classes,), default=None
Target names used for plotting. By default, `labels` will be used if it is defined, otherwise the unique labels of `y_true` and `y_pred` will be used.
**include\_values**bool, default=True
Includes values in confusion matrix.
**xticks\_rotation**{‘vertical’, ‘horizontal’} or float, default=’horizontal’
Rotation of xtick labels.
**values\_format**str, default=None
Format specification for values in confusion matrix. If `None`, the format specification is ‘d’ or ‘.2g’ whichever is shorter.
**cmap**str or matplotlib Colormap, default=’viridis’
Colormap recognized by matplotlib.
**ax**matplotlib Axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
**colorbar**bool, default=True
Whether or not to add a colorbar to the plot.
**im\_kw**dict, default=None
Dict with keywords passed to `matplotlib.pyplot.imshow` call.
Returns:
**display**[`ConfusionMatrixDisplay`](#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay")
See also
[`ConfusionMatrixDisplay.from_estimator`](#sklearn.metrics.ConfusionMatrixDisplay.from_estimator "sklearn.metrics.ConfusionMatrixDisplay.from_estimator")
Plot the confusion matrix given an estimator, the data, and the label.
#### Examples
```
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import ConfusionMatrixDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> y_pred = clf.predict(X_test)
>>> ConfusionMatrixDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()
```
plot(*\**, *include\_values=True*, *cmap='viridis'*, *xticks\_rotation='horizontal'*, *values\_format=None*, *ax=None*, *colorbar=True*, *im\_kw=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_plot/confusion_matrix.py#L82)
Plot visualization.
Parameters:
**include\_values**bool, default=True
Includes values in confusion matrix.
**cmap**str or matplotlib Colormap, default=’viridis’
Colormap recognized by matplotlib.
**xticks\_rotation**{‘vertical’, ‘horizontal’} or float, default=’horizontal’
Rotation of xtick labels.
**values\_format**str, default=None
Format specification for values in confusion matrix. If `None`, the format specification is ‘d’ or ‘.2g’ whichever is shorter.
**ax**matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
**colorbar**bool, default=True
Whether or not to add a colorbar to the plot.
**im\_kw**dict, default=None
Dict with keywords passed to `matplotlib.pyplot.imshow` call.
Returns:
**display**[`ConfusionMatrixDisplay`](#sklearn.metrics.ConfusionMatrixDisplay "sklearn.metrics.ConfusionMatrixDisplay")
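A rough follow-on sketch (not part of the upstream docstring), reusing the `disp` object built in the class-level example above to re-draw the same matrix with different styling; the exact figure depends on your matplotlib backend:
```
>>> # Re-plot the existing display with a different colormap and no colorbar
>>> disp.plot(cmap='Blues', values_format='d', colorbar=False)
<...>
>>> plt.show()
```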
Examples using `sklearn.metrics.ConfusionMatrixDisplay`
-------------------------------------------------------
[Visualizations with Display Objects](../../auto_examples/miscellaneous/plot_display_object_visualization#sphx-glr-auto-examples-miscellaneous-plot-display-object-visualization-py)
sklearn.metrics.pairwise\_distances\_argmin
===========================================
sklearn.metrics.pairwise\_distances\_argmin(*X*, *Y*, *\**, *axis=1*, *metric='euclidean'*, *metric\_kwargs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L708)
Compute minimum distances between one point and a set of points.
This function computes for each row in X, the index of the row of Y which is closest (according to the specified distance).
This is mostly equivalent to calling:
`pairwise_distances(X, Y=Y, metric=metric).argmin(axis=axis)`
but uses much less memory, and is faster for large arrays.
This function works with dense 2D arrays only.
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features)
Array containing points.
**Y**array-like of shape (n\_samples\_Y, n\_features)
Arrays containing points.
**axis**int, default=1
Axis along which the argmin and distances are to be computed.
**metric**str or callable, default=”euclidean”
Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used.
If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
Distance matrices are not supported.
Valid values for metric are:
* from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’]
* from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’]
See the documentation for scipy.spatial.distance for details on these metrics.
**metric\_kwargs**dict, default=None
Keyword arguments to pass to specified metric function.
Returns:
**argmin**numpy.ndarray
Y[argmin[i], :] is the row in Y that is closest to X[i, :].
See also
[`pairwise_distances`](sklearn.metrics.pairwise_distances#sklearn.metrics.pairwise_distances "sklearn.metrics.pairwise_distances")
Distances between every pair of samples of X and Y.
[`pairwise_distances_argmin_min`](sklearn.metrics.pairwise_distances_argmin_min#sklearn.metrics.pairwise_distances_argmin_min "sklearn.metrics.pairwise_distances_argmin_min")
Same as `pairwise_distances_argmin` but also returns the distances.
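#### Examples
A minimal sketch with small illustrative arrays (the values below are not from the upstream docstring):
```
>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances_argmin
>>> X = np.array([[0.0, 0.0], [1.0, 1.0]])
>>> Y = np.array([[0.1, 0.0], [2.0, 2.0], [1.0, 0.9]])
>>> # For each row of X, the index of the closest row of Y
>>> pairwise_distances_argmin(X, Y)
array([0, 2])
```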
Examples using `sklearn.metrics.pairwise_distances_argmin`
----------------------------------------------------------
[Color Quantization using K-Means](../../auto_examples/cluster/plot_color_quantization#sphx-glr-auto-examples-cluster-plot-color-quantization-py)
[Comparison of the K-Means and MiniBatchKMeans clustering algorithms](../../auto_examples/cluster/plot_mini_batch_kmeans#sphx-glr-auto-examples-cluster-plot-mini-batch-kmeans-py)
sklearn.metrics.label\_ranking\_loss
====================================
sklearn.metrics.label\_ranking\_loss(*y\_true*, *y\_score*, *\**, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L1177)
Compute Ranking loss measure.
Compute the average number of label pairs that are incorrectly ordered given y\_score weighted by the size of the label set and the number of labels not in the label set.
This is similar to the error set size, but weighted by the number of relevant and irrelevant labels. The best performance is achieved with a ranking loss of zero.
Read more in the [User Guide](../model_evaluation#label-ranking-loss).
New in version 0.17: A function *label\_ranking\_loss*
Parameters:
**y\_true**{ndarray, sparse matrix} of shape (n\_samples, n\_labels)
True binary labels in binary indicator format.
**y\_score**ndarray of shape (n\_samples, n\_labels)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision\_function” on some classifiers).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**loss**float
Average number of label pairs that are incorrectly ordered given y\_score weighted by the size of the label set and the number of labels not in the label set.
#### References
[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
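#### Examples
A minimal sketch on a toy multilabel problem (scores chosen for illustration; the first sample mis-orders one of its two label pairs, the second mis-orders both):
```
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_loss
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
>>> label_ranking_loss(y_true, y_score)
0.75
```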
sklearn.config\_context
=======================
sklearn.config\_context(*\**, *assume\_finite=None*, *working\_memory=None*, *print\_changed\_only=None*, *display=None*, *pairwise\_dist\_chunk\_size=None*, *enable\_cython\_pairwise\_dist=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/_config.py#L133)
Context manager for global scikit-learn configuration.
Parameters:
**assume\_finite**bool, default=None
If True, validation for finiteness will be skipped, saving time, but leading to potential crashes. If False, validation for finiteness will be performed, avoiding errors. If None, the existing value won’t change. The default value is False.
**working\_memory**int, default=None
If set, scikit-learn will attempt to limit the size of temporary arrays to this number of MiB (per job when parallelised), often saving both computation time and memory on expensive operations that can be performed in chunks. If None, the existing value won’t change. The default value is 1024.
**print\_changed\_only**bool, default=None
If True, only the parameters that were set to non-default values will be printed when printing an estimator. For example, `print(SVC())` while True will only print ‘SVC()’, but would print ‘SVC(C=1.0, cache\_size=200, …)’ with all the non-changed parameters when False. If None, the existing value won’t change. The default value is True.
Changed in version 0.23: Default changed from False to True.
**display**{‘text’, ‘diagram’}, default=None
If ‘diagram’, estimators will be displayed as a diagram in a Jupyter lab or notebook context. If ‘text’, estimators will be displayed as text. If None, the existing value won’t change. The default value is ‘diagram’.
New in version 0.23.
**pairwise\_dist\_chunk\_size**int, default=None
The number of vectors per chunk for PairwiseDistancesReduction. Default is 256 (suitable for most of modern laptops’ caches and architectures).
Intended for easier benchmarking and testing of scikit-learn internals. End users are not expected to benefit from customizing this configuration setting.
New in version 1.1.
**enable\_cython\_pairwise\_dist**bool, default=None
Use PairwiseDistancesReduction when possible. Default is True.
Intended for easier benchmarking and testing of scikit-learn internals. End users are not expected to benefit from customizing this configuration setting.
New in version 1.1.
Yields:
None.
See also
[`set_config`](sklearn.set_config#sklearn.set_config "sklearn.set_config")
Set global scikit-learn configuration.
[`get_config`](sklearn.get_config#sklearn.get_config "sklearn.get_config")
Retrieve current values of the global configuration.
#### Notes
All settings, not just those presently modified, will be returned to their previous values when the context manager is exited.
#### Examples
```
>>> import sklearn
>>> from sklearn.utils.validation import assert_all_finite
>>> with sklearn.config_context(assume_finite=True):
... assert_all_finite([float('nan')])
>>> with sklearn.config_context(assume_finite=True):
... with sklearn.config_context(assume_finite=False):
... assert_all_finite([float('nan')])
Traceback (most recent call last):
...
ValueError: Input contains NaN...
```
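A further sketch (not from the upstream docstring) illustrating `print_changed_only`; it only assumes that an `SVC` repr listing every parameter is longer than the compact default repr:
```
>>> from sklearn.svm import SVC
>>> with sklearn.config_context(print_changed_only=False):
...     long_repr = repr(SVC())   # all parameters spelled out
>>> with sklearn.config_context(print_changed_only=True):
...     short_repr = repr(SVC())  # only non-default parameters
>>> short_repr
'SVC()'
>>> len(long_repr) > len(short_repr)
True
```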
sklearn.datasets.fetch\_california\_housing
===========================================
sklearn.datasets.fetch\_california\_housing(*\**, *data\_home=None*, *download\_if\_missing=True*, *return\_X\_y=False*, *as\_frame=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_california_housing.py#L53)
Load the California housing dataset (regression).
| | |
| --- | --- |
| Samples total | 20640 |
| Dimensionality | 8 |
| Features | real |
| Target | real 0.15 - 5. |
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/real_world.html#california-housing-dataset).
Parameters:
**data\_home**str, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit\_learn\_data’ subfolders.
**download\_if\_missing**bool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
**return\_X\_y**bool, default=False
If True, returns `(data.data, data.target)` instead of a Bunch object.
New in version 0.20.
**as\_frame**bool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target\_columns.
New in version 0.23.
Returns:
**dataset**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
datandarray, shape (20640, 8)
Each row corresponding to the 8 feature values in order. If `as_frame` is True, `data` is a pandas object.
targetnumpy array of shape (20640,)
Each value corresponds to the average house value in units of 100,000. If `as_frame` is True, `target` is a pandas object.
feature\_nameslist of length 8
Array of ordered feature names used in the dataset.
DESCRstr
Description of the California housing dataset.
framepandas DataFrame
Only present when `as_frame=True`. DataFrame with `data` and `target`.
New in version 0.23.
**(data, target)**tuple if `return_X_y` is True
A tuple of two ndarray. The first containing a 2D array of shape (n\_samples, n\_features) with each row representing one sample and each column representing the features. The second ndarray of shape (n\_samples,) containing the target samples.
New in version 0.20.
#### Notes
This dataset consists of 20,640 samples and 8 features.
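#### Examples
A minimal sketch (not from the upstream docstring); the first call downloads the data to `data_home` if it is not cached yet, and the feature names shown are those shipped with scikit-learn 1.1:
```
>>> from sklearn.datasets import fetch_california_housing
>>> housing = fetch_california_housing()
>>> housing.data.shape
(20640, 8)
>>> housing.target.shape
(20640,)
>>> housing.feature_names[:2]
['MedInc', 'HouseAge']
```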
Examples using `sklearn.datasets.fetch_california_housing`
----------------------------------------------------------
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Partial Dependence and Individual Conditional Expectation Plots](../../auto_examples/inspection/plot_partial_dependence#sphx-glr-auto-examples-inspection-plot-partial-dependence-py)
[Imputing missing values before building an estimator](../../auto_examples/impute/plot_missing_values#sphx-glr-auto-examples-impute-plot-missing-values-py)
[Imputing missing values with variants of IterativeImputer](../../auto_examples/impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)
[Compare the effect of different scalers on data with outliers](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py)
sklearn.datasets.make\_s\_curve
===============================
sklearn.datasets.make\_s\_curve(*n\_samples=100*, *\**, *noise=0.0*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L1564)
Generate an S curve dataset.
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**n\_samples**int, default=100
The number of sample points on the S curve.
**noise**float, default=0.0
The standard deviation of the gaussian noise.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**X**ndarray of shape (n\_samples, 3)
The points.
**t**ndarray of shape (n\_samples,)
The univariate position of the sample according to the main dimension of the points in the manifold.
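#### Examples
A minimal sketch (parameter values are illustrative):
```
>>> from sklearn.datasets import make_s_curve
>>> X, t = make_s_curve(n_samples=200, noise=0.05, random_state=0)
>>> X.shape
(200, 3)
>>> t.shape
(200,)
```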
Examples using `sklearn.datasets.make_s_curve`
----------------------------------------------
[Comparison of Manifold Learning methods](../../auto_examples/manifold/plot_compare_methods#sphx-glr-auto-examples-manifold-plot-compare-methods-py)
[t-SNE: The effect of various perplexity values on the shape](../../auto_examples/manifold/plot_t_sne_perplexity#sphx-glr-auto-examples-manifold-plot-t-sne-perplexity-py)
sklearn.datasets.load\_digits
=============================
sklearn.datasets.load\_digits(*\**, *n\_class=10*, *return\_X\_y=False*, *as\_frame=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_base.py#L821)
Load and return the digits dataset (classification).
Each datapoint is an 8x8 image of a digit.
| | |
| --- | --- |
| Classes | 10 |
| Samples per class | ~180 |
| Samples total | 1797 |
| Dimensionality | 64 |
| Features | integers 0-16 |
This is a copy of the test set of the UCI ML hand-written digits datasets <https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits>
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/toy_dataset.html#digits-dataset).
Parameters:
**n\_class**int, default=10
The number of classes to return. Between 0 and 10.
**return\_X\_y**bool, default=False
If True, returns `(data, target)` instead of a Bunch object. See below for more information about the `data` and `target` object.
New in version 0.18.
**as\_frame**bool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If `return_X_y` is True, then (`data`, `target`) will be pandas DataFrames or Series as described below.
New in version 0.23.
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (1797, 64)
The flattened data matrix. If `as_frame=True`, `data` will be a pandas DataFrame.
target: {ndarray, Series} of shape (1797,)
The classification target. If `as_frame=True`, `target` will be a pandas Series.
feature\_names: list
The names of the dataset columns.
target\_names: list
The names of target classes.
New in version 0.20.
frame: DataFrame of shape (1797, 65)
Only present when `as_frame=True`. DataFrame with `data` and `target`.
New in version 0.23.
images: {ndarray} of shape (1797, 8, 8)
The raw image data.
DESCR: str
The full description of the dataset.
**(data, target)**tuple if `return_X_y` is True
A tuple of two ndarrays by default. The first contains a 2D ndarray of shape (1797, 64) with each row representing one sample and each column representing the features. The second ndarray of shape (1797) contains the target samples. If `as_frame=True`, both arrays are pandas objects, i.e. `X` a dataframe and `y` a series.
New in version 0.18.
#### Examples
To load the data and visualize the images:
```
>>> from sklearn.datasets import load_digits
>>> digits = load_digits()
>>> print(digits.data.shape)
(1797, 64)
>>> import matplotlib.pyplot as plt
>>> plt.gray()
>>> plt.matshow(digits.images[0])
<...>
>>> plt.show()
```
Examples using `sklearn.datasets.load_digits`
---------------------------------------------
[Recognizing hand-written digits](../../auto_examples/classification/plot_digits_classification#sphx-glr-auto-examples-classification-plot-digits-classification-py)
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Feature agglomeration](../../auto_examples/cluster/plot_digits_agglomeration#sphx-glr-auto-examples-cluster-plot-digits-agglomeration-py)
[Various Agglomerative Clustering on a 2D embedding of digits](../../auto_examples/cluster/plot_digits_linkage#sphx-glr-auto-examples-cluster-plot-digits-linkage-py)
[The Digit Dataset](../../auto_examples/datasets/plot_digits_last_image#sphx-glr-auto-examples-datasets-plot-digits-last-image-py)
[Recursive feature elimination](../../auto_examples/feature_selection/plot_rfe_digits#sphx-glr-auto-examples-feature-selection-plot-rfe-digits-py)
[Comparing various online solvers](../../auto_examples/linear_model/plot_sgd_comparison#sphx-glr-auto-examples-linear-model-plot-sgd-comparison-py)
[L1 Penalty and Sparsity in Logistic Regression](../../auto_examples/linear_model/plot_logistic_l1_l2_sparsity#sphx-glr-auto-examples-linear-model-plot-logistic-l1-l2-sparsity-py)
[Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](../../auto_examples/manifold/plot_lle_digits#sphx-glr-auto-examples-manifold-plot-lle-digits-py)
[Explicit feature map approximation for RBF kernels](../../auto_examples/miscellaneous/plot_kernel_approximation#sphx-glr-auto-examples-miscellaneous-plot-kernel-approximation-py)
[The Johnson-Lindenstrauss bound for embedding with random projections](../../auto_examples/miscellaneous/plot_johnson_lindenstrauss_bound#sphx-glr-auto-examples-miscellaneous-plot-johnson-lindenstrauss-bound-py)
[Balance model complexity and cross-validated score](../../auto_examples/model_selection/plot_grid_search_refit_callable#sphx-glr-auto-examples-model-selection-plot-grid-search-refit-callable-py)
[Comparing randomized search and grid search for hyperparameter estimation](../../auto_examples/model_selection/plot_randomized_search#sphx-glr-auto-examples-model-selection-plot-randomized-search-py)
[Custom refit strategy of a grid search with cross-validation](../../auto_examples/model_selection/plot_grid_search_digits#sphx-glr-auto-examples-model-selection-plot-grid-search-digits-py)
[Plotting Learning Curves](../../auto_examples/model_selection/plot_learning_curve#sphx-glr-auto-examples-model-selection-plot-learning-curve-py)
[Plotting Validation Curves](../../auto_examples/model_selection/plot_validation_curve#sphx-glr-auto-examples-model-selection-plot-validation-curve-py)
[Caching nearest neighbors](../../auto_examples/neighbors/plot_caching_nearest_neighbors#sphx-glr-auto-examples-neighbors-plot-caching-nearest-neighbors-py)
[Dimensionality Reduction with Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_dim_reduction#sphx-glr-auto-examples-neighbors-plot-nca-dim-reduction-py)
[Kernel Density Estimation](../../auto_examples/neighbors/plot_digits_kde_sampling#sphx-glr-auto-examples-neighbors-plot-digits-kde-sampling-py)
[Compare Stochastic learning strategies for MLPClassifier](../../auto_examples/neural_networks/plot_mlp_training_curves#sphx-glr-auto-examples-neural-networks-plot-mlp-training-curves-py)
[Restricted Boltzmann Machine features for digit classification](../../auto_examples/neural_networks/plot_rbm_logistic_classification#sphx-glr-auto-examples-neural-networks-plot-rbm-logistic-classification-py)
[Pipelining: chaining a PCA and a logistic regression](../../auto_examples/compose/plot_digits_pipe#sphx-glr-auto-examples-compose-plot-digits-pipe-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](../../auto_examples/compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
[Label Propagation digits active learning](../../auto_examples/semi_supervised/plot_label_propagation_digits_active_learning#sphx-glr-auto-examples-semi-supervised-plot-label-propagation-digits-active-learning-py)
[Label Propagation digits: Demonstrating performance](../../auto_examples/semi_supervised/plot_label_propagation_digits#sphx-glr-auto-examples-semi-supervised-plot-label-propagation-digits-py)
[Cross-validation on Digits Dataset Exercise](../../auto_examples/exercises/plot_cv_digits#sphx-glr-auto-examples-exercises-plot-cv-digits-py)
[Digits Classification Exercise](../../auto_examples/exercises/plot_digits_classification_exercise#sphx-glr-auto-examples-exercises-plot-digits-classification-exercise-py)
sklearn.decomposition.dict\_learning\_online
============================================
sklearn.decomposition.dict\_learning\_online(*X*, *n\_components=2*, *\**, *alpha=1*, *n\_iter='deprecated'*, *max\_iter=None*, *return\_code=True*, *dict\_init=None*, *callback=None*, *batch\_size='warn'*, *verbose=False*, *shuffle=True*, *n\_jobs=None*, *method='lars'*, *iter\_offset='deprecated'*, *random\_state=None*, *return\_inner\_stats='deprecated'*, *inner\_stats='deprecated'*, *return\_n\_iter='deprecated'*, *positive\_dict=False*, *positive\_code=False*, *method\_max\_iter=1000*, *tol=0.001*, *max\_no\_improvement=10*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L738)
Solves a dictionary learning matrix factorization problem online.
Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:
```
(U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
(U,V)
with || V_k ||_2 = 1 for all 0 <= k < n_components
```
where V is the dictionary and U is the sparse code. ||.||\_Fro stands for the Frobenius norm and ||.||\_1,1 stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix. This is accomplished by repeatedly iterating over mini-batches by slicing the input data.
Read more in the [User Guide](../decomposition#dictionarylearning).
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Data matrix.
**n\_components**int or None, default=2
Number of dictionary atoms to extract. If None, then `n_components` is set to `n_features`.
**alpha**float, default=1
Sparsity controlling parameter.
**n\_iter**int, default=100
Number of mini-batch iterations to perform.
Deprecated since version 1.1: `n_iter` is deprecated in 1.1 and will be removed in 1.3. Use `max_iter` instead.
**max\_iter**int, default=None
Maximum number of iterations over the complete dataset before stopping independently of any early stopping criterion heuristics. If `max_iter` is not None, `n_iter` is ignored.
New in version 1.1.
**return\_code**bool, default=True
Whether to also return the code U or just the dictionary `V`.
**dict\_init**ndarray of shape (n\_components, n\_features), default=None
Initial values for the dictionary for warm restart scenarios. If `None`, the initial values for the dictionary are created with an SVD decomposition of the data via `randomized_svd`.
**callback**callable, default=None
A callable that gets invoked at the end of each iteration.
**batch\_size**int, default=3
The number of samples to take in each batch.
**verbose**bool, default=False
To control the verbosity of the procedure.
**shuffle**bool, default=True
Whether to shuffle the data before splitting it in batches.
**n\_jobs**int, default=None
Number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**method**{‘lars’, ‘cd’}, default=’lars’
* `'lars'`: uses the least angle regression method to solve the lasso problem (`linear_model.lars_path`);
* `'cd'`: uses the coordinate descent method to compute the Lasso solution (`linear_model.Lasso`). Lars will be faster if the estimated components are sparse.
**iter\_offset**int, default=0
Number of previous iterations completed on the dictionary used for initialization.
Deprecated since version 1.1: `iter_offset` serves internal purpose only and will be removed in 1.3.
**random\_state**int, RandomState instance or None, default=None
Used for initializing the dictionary when `dict_init` is not specified, randomly shuffling the data when `shuffle` is set to `True`, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**return\_inner\_stats**bool, default=False
Return the inner statistics A (dictionary covariance) and B (data approximation). Useful to restart the algorithm in an online setting. If `return_inner_stats` is `True`, `return_code` is ignored.
Deprecated since version 1.1: `return_inner_stats` serves internal purpose only and will be removed in 1.3.
**inner\_stats**tuple of (A, B) ndarrays, default=None
Inner sufficient statistics that are kept by the algorithm. Passing them at initialization is useful in online settings, to avoid losing the history of the evolution. `A` `(n_components, n_components)` is the dictionary covariance matrix. `B` `(n_features, n_components)` is the data approximation matrix.
Deprecated since version 1.1: `inner_stats` serves internal purpose only and will be removed in 1.3.
**return\_n\_iter**bool, default=False
Whether or not to return the number of iterations.
Deprecated since version 1.1: `return_n_iter` will be removed in 1.3 and n\_iter will always be returned.
**positive\_dict**bool, default=False
Whether to enforce positivity when finding the dictionary.
New in version 0.20.
**positive\_code**bool, default=False
Whether to enforce positivity when finding the code.
New in version 0.20.
**method\_max\_iter**int, default=1000
Maximum number of iterations to perform when solving the lasso problem.
New in version 0.22.
**tol**float, default=1e-3
Control early stopping based on the norm of the differences in the dictionary between 2 steps. Used only if `max_iter` is not None.
To disable early stopping based on changes in the dictionary, set `tol` to 0.0.
New in version 1.1.
**max\_no\_improvement**int, default=10
Control early stopping based on the consecutive number of mini batches that does not yield an improvement on the smoothed cost function. Used only if `max_iter` is not None.
To disable convergence detection based on cost function, set `max_no_improvement` to None.
New in version 1.1.
Returns:
**code**ndarray of shape (n\_samples, n\_components),
The sparse code (only returned if `return_code=True`).
**dictionary**ndarray of shape (n\_components, n\_features),
The solutions to the dictionary learning problem.
**n\_iter**int
Number of iterations run. Returned only if `return_n_iter` is set to `True`.
See also
[`dict_learning`](sklearn.decomposition.dict_learning#sklearn.decomposition.dict_learning "sklearn.decomposition.dict_learning")
[`DictionaryLearning`](sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning "sklearn.decomposition.DictionaryLearning")
[`MiniBatchDictionaryLearning`](sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning "sklearn.decomposition.MiniBatchDictionaryLearning")
[`SparsePCA`](sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA")
[`MiniBatchSparsePCA`](sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA "sklearn.decomposition.MiniBatchSparsePCA")
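#### Examples
A minimal shape-only sketch on random data (not from the upstream docstring); the hyper-parameter values are illustrative and `batch_size` is passed explicitly to avoid the 1.1 deprecation warning:
```
>>> import numpy as np
>>> from sklearn.decomposition import dict_learning_online
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(30, 20)  # 30 samples, 20 features
>>> code, dictionary = dict_learning_online(
...     X, n_components=15, alpha=0.2, max_iter=20, batch_size=3,
...     random_state=0)
>>> code.shape, dictionary.shape
((30, 15), (15, 20))
```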
sklearn.datasets.load\_svmlight\_file
=====================================
sklearn.datasets.load\_svmlight\_file(*f*, *\**, *n\_features=None*, *dtype=<class 'numpy.float64'>*, *multilabel=False*, *zero\_based='auto'*, *query\_id=False*, *offset=0*, *length=-1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_svmlight_format_io.py#L42)
Load datasets in the svmlight / libsvm format into a sparse CSR matrix.
This format is a text-based format, with one sample per line. It does not store zero-valued features, which makes it suitable for sparse datasets.
The first element of each line can be used to store a target variable to predict.
This format is used as the default format for both svmlight and the libsvm command line programs.
Parsing a text based source can be expensive. When repeatedly working on the same dataset, it is recommended to wrap this loader with joblib.Memory.cache to store a memmapped backup of the CSR results of the first call and benefit from the near instantaneous loading of memmapped structures for the subsequent calls.
In case the file contains pairwise preference constraints (known as “qid” in the svmlight format), these are ignored unless the query\_id parameter is set to True. These pairwise preference constraints can be used to constrain the combination of samples when using pairwise loss functions (as is the case in some learning-to-rank problems) so that only pairs with the same query\_id value are considered.
This implementation is written in Cython and is reasonably fast. However, a faster API-compatible loader is also available at:
<https://github.com/mblondel/svmlight-loader>
Parameters:
**f**str, file-like or int
(Path to) a file to load. If a path ends in “.gz” or “.bz2”, it will be uncompressed on the fly. If an integer is passed, it is assumed to be a file descriptor. A file-like or file descriptor will not be closed by this function. A file-like object must be opened in binary mode.
**n\_features**int, default=None
The number of features to use. If None, it will be inferred. This argument is useful to load several files that are subsets of a bigger sliced dataset: each subset might not have examples of every feature, hence the inferred shape might vary from one slice to another. n\_features is only required if `offset` or `length` are passed a non-default value.
**dtype**numpy data type, default=np.float64
Data type of dataset to be loaded. This will be the data type of the output numpy arrays `X` and `y`.
**multilabel**bool, default=False
Samples may have several labels each (see <https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html>)
**zero\_based**bool or “auto”, default=”auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe when no `offset` or `length` is passed. If `offset` or `length` are passed, the “auto” mode falls back to `zero_based=True` to avoid having the heuristic check yield inconsistent results on different segments of the file.
**query\_id**bool, default=False
If True, will return the query\_id array for each file.
**offset**int, default=0
Ignore the offset first bytes by seeking forward, then discarding the following bytes up until the next new line character.
**length**int, default=-1
If strictly positive, stop reading any new line of data once the position in the file has reached the (offset + length) bytes threshold.
Returns:
**X**scipy.sparse matrix of shape (n\_samples, n\_features)
**y**ndarray of shape (n\_samples,), or, in the multilabel case, a list of tuples of length n\_samples.
**query\_id**array of shape (n\_samples,)
query\_id for each sample. Only returned when query\_id is set to True.
See also
[`load_svmlight_files`](sklearn.datasets.load_svmlight_files#sklearn.datasets.load_svmlight_files "sklearn.datasets.load_svmlight_files")
Similar function for loading multiple files in this format, enforcing the same number of features/columns on all of them.
#### Examples
To use joblib.Memory to cache the svmlight file:
```
from joblib import Memory
from sklearn.datasets import load_svmlight_file
mem = Memory("./mycache")
@mem.cache
def get_data():
data = load_svmlight_file("mysvmlightfile")
return data[0], data[1]
X, y = get_data()
```
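A self-contained round-trip sketch (not from the upstream docstring) that writes a tiny file with `dump_svmlight_file` and reads it back; the file name and temporary directory are illustrative:
```
>>> import os
>>> import tempfile
>>> import numpy as np
>>> from sklearn.datasets import dump_svmlight_file, load_svmlight_file
>>> X = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0]])
>>> y = np.array([0, 1])
>>> with tempfile.TemporaryDirectory() as tmp:
...     path = os.path.join(tmp, "data.svmlight")
...     dump_svmlight_file(X, y, path, zero_based=True)
...     X_loaded, y_loaded = load_svmlight_file(path, zero_based=True)
>>> X_loaded.toarray()  # zero entries are simply not stored in the file
array([[1., 0., 2.],
       [0., 3., 0.]])
>>> y_loaded
array([0., 1.])
```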
sklearn.decomposition.sparse\_encode
====================================
sklearn.decomposition.sparse\_encode(*X*, *dictionary*, *\**, *gram=None*, *cov=None*, *algorithm='lasso\_lars'*, *n\_nonzero\_coefs=None*, *alpha=None*, *copy\_cov=True*, *init=None*, *max\_iter=1000*, *n\_jobs=None*, *check\_input=True*, *verbose=0*, *positive=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L227)
Sparse coding.
Each row of the result is the solution to a sparse coding problem. The goal is to find a sparse array `code` such that:
```
X ~= code * dictionary
```
Read more in the [User Guide](../decomposition#sparsecoder).
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Data matrix.
**dictionary**ndarray of shape (n\_components, n\_features)
The dictionary matrix against which to solve the sparse coding of the data. Some of the algorithms assume normalized rows for meaningful output.
**gram**ndarray of shape (n\_components, n\_components), default=None
Precomputed Gram matrix, `dictionary * dictionary'`.
**cov**ndarray of shape (n\_components, n\_samples), default=None
Precomputed covariance, `dictionary' * X`.
**algorithm**{‘lasso\_lars’, ‘lasso\_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’lasso\_lars’
The algorithm used:
* `'lars'`: uses the least angle regression method (`linear_model.lars_path`);
* `'lasso_lars'`: uses Lars to compute the Lasso solution;
* `'lasso_cd'`: uses the coordinate descent method to compute the Lasso solution (`linear_model.Lasso`). lasso\_lars will be faster if the estimated components are sparse;
* `'omp'`: uses orthogonal matching pursuit to estimate the sparse solution;
* `'threshold'`: squashes to zero all coefficients less than regularization from the projection `dictionary * data'`.
**n\_nonzero\_coefs**int, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by `algorithm='lars'` and `algorithm='omp'` and is overridden by `alpha` in the `omp` case. If `None`, then `n_nonzero_coefs=int(n_features / 10)`.
**alpha**float, default=None
If `algorithm='lasso_lars'` or `algorithm='lasso_cd'`, `alpha` is the penalty applied to the L1 norm. If `algorithm='threshold'`, `alpha` is the absolute value of the threshold below which coefficients will be squashed to zero. If `algorithm='omp'`, `alpha` is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides `n_nonzero_coefs`. If `None`, default to 1.
**copy\_cov**bool, default=True
Whether to copy the precomputed covariance matrix; if `False`, it may be overwritten.
**init**ndarray of shape (n\_samples, n\_components), default=None
Initialization value of the sparse codes. Only used if `algorithm='lasso_cd'`.
**max\_iter**int, default=1000
Maximum number of iterations to perform if `algorithm='lasso_cd'` or `'lasso_lars'`.
**n\_jobs**int, default=None
Number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**check\_input**bool, default=True
If `False`, the input arrays X and dictionary will not be checked.
**verbose**int, default=0
Controls the verbosity; the higher, the more messages.
**positive**bool, default=False
Whether to enforce positivity when finding the encoding.
New in version 0.20.
Returns:
**code**ndarray of shape (n\_samples, n\_components)
The sparse codes.
See also
[`sklearn.linear_model.lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`sklearn.linear_model.orthogonal_mp`](sklearn.linear_model.orthogonal_mp#sklearn.linear_model.orthogonal_mp "sklearn.linear_model.orthogonal_mp")
Solves Orthogonal Matching Pursuit problems.
[`sklearn.linear_model.Lasso`](sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso")
Train Linear Model with L1 prior as regularizer.
[`SparseCoder`](sklearn.decomposition.sparsecoder#sklearn.decomposition.SparseCoder "sklearn.decomposition.SparseCoder")
Find a sparse representation of data from a fixed precomputed dictionary.
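#### Examples
A minimal sketch with a tiny hand-built dictionary (values are illustrative); with a very small `alpha` the sparse code reconstructs `X` almost exactly:
```
>>> import numpy as np
>>> from sklearn.decomposition import sparse_encode
>>> X = np.array([[-1.0, -1.0, -1.0], [0.0, 0.0, 3.0]])
>>> dictionary = np.array(
...     [[0.0, 1.0, 0.0],
...      [-1.0, -1.0, 2.0],
...      [1.0, 1.0, 1.0],
...      [0.0, 1.0, 1.0],
...      [0.0, 2.0, 1.0]])
>>> code = sparse_encode(X, dictionary, alpha=1e-10)
>>> code.shape
(2, 5)
>>> np.allclose(code @ dictionary, X, atol=1e-6)
True
```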
sklearn.dummy.DummyRegressor
============================
*class*sklearn.dummy.DummyRegressor(*\**, *strategy='mean'*, *constant=None*, *quantile=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L464)
Regressor that makes predictions using simple rules.
This regressor is useful as a simple baseline to compare with other (real) regressors. Do not use it for real problems.
Read more in the [User Guide](../model_evaluation#dummy-estimators).
New in version 0.13.
Parameters:
**strategy**{“mean”, “median”, “quantile”, “constant”}, default=”mean”
Strategy to use to generate predictions.
* “mean”: always predicts the mean of the training set
* “median”: always predicts the median of the training set
* “quantile”: always predicts a specified quantile of the training set, provided with the quantile parameter.
* “constant”: always predicts a constant value that is provided by the user.
**constant**int or float or array-like of shape (n\_outputs,), default=None
The explicit constant as predicted by the “constant” strategy. This parameter is useful only for the “constant” strategy.
**quantile**float in [0.0, 1.0], default=None
The quantile to predict using the “quantile” strategy. A quantile of 0.5 corresponds to the median, while 0.0 to the minimum and 1.0 to the maximum.
Attributes:
**constant\_**ndarray of shape (1, n\_outputs)
Mean or median or quantile of the training targets or constant value given by the user.
[`n_features_in_`](#sklearn.dummy.DummyRegressor.n_features_in_ "sklearn.dummy.DummyRegressor.n_features_in_")`None`
DEPRECATED: `n_features_in_` is deprecated in 1.0 and will be removed in 1.2.
**n\_outputs\_**int
Number of outputs.
See also
[`DummyClassifier`](sklearn.dummy.dummyclassifier#sklearn.dummy.DummyClassifier "sklearn.dummy.DummyClassifier")
Classifier that makes predictions using simple rules.
#### Examples
```
>>> import numpy as np
>>> from sklearn.dummy import DummyRegressor
>>> X = np.array([1.0, 2.0, 3.0, 4.0])
>>> y = np.array([2.0, 3.0, 5.0, 10.0])
>>> dummy_regr = DummyRegressor(strategy="mean")
>>> dummy_regr.fit(X, y)
DummyRegressor()
>>> dummy_regr.predict(X)
array([5., 5., 5., 5.])
>>> dummy_regr.score(X, y)
0.0
```
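A small follow-on sketch (not from the upstream docstring), reusing `X` and `y` from the example above to show the “quantile” strategy; the 90th percentile of `y` is 8.5 under linear interpolation:
```
>>> dummy_q = DummyRegressor(strategy="quantile", quantile=0.9)
>>> dummy_q.fit(X, y)
DummyRegressor(quantile=0.9, strategy='quantile')
>>> dummy_q.predict(X)  # always predicts the 0.9 quantile of the training targets
array([8.5, 8.5, 8.5, 8.5])
```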
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.dummy.DummyRegressor.fit "sklearn.dummy.DummyRegressor.fit")(X, y[, sample\_weight]) | Fit the dummy regressor. |
| [`get_params`](#sklearn.dummy.DummyRegressor.get_params "sklearn.dummy.DummyRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.dummy.DummyRegressor.predict "sklearn.dummy.DummyRegressor.predict")(X[, return\_std]) | Predict target values for test vectors X. |
| [`score`](#sklearn.dummy.DummyRegressor.score "sklearn.dummy.DummyRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination R^2 of the prediction. |
| [`set_params`](#sklearn.dummy.DummyRegressor.set_params "sklearn.dummy.DummyRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L535)
Fit the dummy regressor.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**self**object
Fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_features\_in\_
DEPRECATED: `n_features_in_` is deprecated in 1.0 and will be removed in 1.2.
predict(*X*, *return\_std=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L624)
Predict target values for test vectors X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test data.
**return\_std**bool, default=False
Whether to return the standard deviation of posterior prediction. All zeros in this case.
New in version 0.20.
Returns:
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Predicted target values for X.
**y\_std**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Standard deviation of predictive distribution of query points.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L665)
Return the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as `(1 - u/v)`, where `u` is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and `v` is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.
Parameters:
**X**None or array-like of shape (n\_samples, n\_features)
Test samples. Passing None as test samples gives the same result as passing real test samples, since `DummyRegressor` operates independently of the sampled observations.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for X.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
R^2 of `self.predict(X)` wrt. y.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.dummy.DummyRegressor`
---------------------------------------------
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
sklearn.manifold.trustworthiness
================================
sklearn.manifold.trustworthiness(*X*, *X\_embedded*, *\**, *n\_neighbors=5*, *metric='euclidean'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/manifold/_t_sne.py#L445)
Expresses to what extent the local structure is retained.
The trustworthiness is within [0, 1]. It is defined as
\[T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum_{i=1}^{n} \sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))\]
where for each sample i, \(\mathcal{N}_{i}^{k}\) are its k nearest neighbors in the output space, and every sample j is its \(r(i, j)\)-th nearest neighbor in the input space. In other words, any unexpected nearest neighbors in the output space are penalised in proportion to their rank in the input space.
Parameters:
**X**ndarray of shape (n\_samples, n\_features) or (n\_samples, n\_samples)
If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row.
**X\_embedded**ndarray of shape (n\_samples, n\_components)
Embedding of the training data in low-dimensional space.
**n\_neighbors**int, default=5
The number of neighbors that will be considered. Should be fewer than `n_samples / 2` to ensure that the trustworthiness lies within [0, 1], as mentioned in [[1]](#r5831441d8a57-1). An error will be raised otherwise.
**metric**str or callable, default=’euclidean’
Which metric to use for computing pairwise distances between samples from the original input space. If metric is ‘precomputed’, X must be a matrix of pairwise distances or squared distances. Otherwise, for a list of available metrics, see the documentation of argument metric in `sklearn.pairwise.pairwise_distances` and metrics listed in `sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS`. Note that the “cosine” metric uses [`cosine_distances`](sklearn.metrics.pairwise.cosine_distances#sklearn.metrics.pairwise.cosine_distances "sklearn.metrics.pairwise.cosine_distances").
New in version 0.20.
Returns:
**trustworthiness**float
Trustworthiness of the low-dimensional embedding.
#### References
[[1](#id1)] Jarkko Venna and Samuel Kaski. 2001. Neighborhood Preservation in Nonlinear Projection Methods: An Experimental Study. In Proceedings of the International Conference on Artificial Neural Networks (ICANN ‘01). Springer-Verlag, Berlin, Heidelberg, 485-491.
[2] Laurens van der Maaten. Learning a Parametric Embedding by Preserving Local Structure. Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics, PMLR 5:384-391, 2009.
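#### Examples
A minimal sketch (not from the upstream docstring) using a PCA embedding of random data; it only checks that the returned score lies in the documented [0, 1] range:
```
>>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> from sklearn.manifold import trustworthiness
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(50, 5)
>>> X_embedded = PCA(n_components=2).fit_transform(X)
>>> score = trustworthiness(X, X_embedded, n_neighbors=5)
>>> 0.0 <= score <= 1.0
True
```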
sklearn.utils.validation.has\_fit\_parameter
============================================
sklearn.utils.validation.has\_fit\_parameter(*estimator*, *parameter*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L1188)
Check whether the estimator’s fit method supports the given parameter.
Parameters:
**estimator**object
An estimator to inspect.
**parameter**str
The searched parameter.
Returns:
**is\_parameter**bool
Whether the parameter was found to be a named parameter of the estimator’s fit method.
#### Examples
```
>>> from sklearn.svm import SVC
>>> from sklearn.utils.validation import has_fit_parameter
>>> has_fit_parameter(SVC(), "sample_weight")
True
```
sklearn.preprocessing.label\_binarize
=====================================
sklearn.preprocessing.label\_binarize(*y*, *\**, *classes*, *neg\_label=0*, *pos\_label=1*, *sparse\_output=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L416)
Binarize labels in a one-vs-all fashion.
Several regression and binary classification algorithms are available in scikit-learn. A simple way to extend these algorithms to the multi-class classification case is to use the so-called one-vs-all scheme.
This function makes it possible to compute this transformation for a fixed set of class labels known ahead of time.
Parameters:
**y**array-like
Sequence of integer labels or multilabel data to encode.
**classes**array-like of shape (n\_classes,)
Uniquely holds the label for each class.
**neg\_label**int, default=0
Value with which negative labels must be encoded.
**pos\_label**int, default=1
Value with which positive labels must be encoded.
**sparse\_output**bool, default=False,
Set to true if output binary array is desired in CSR sparse format.
Returns:
**Y**{ndarray, sparse matrix} of shape (n\_samples, n\_classes)
Shape will be (n\_samples, 1) for binary problems. Sparse matrix will be of CSR format.
See also
[`LabelBinarizer`](sklearn.preprocessing.labelbinarizer#sklearn.preprocessing.LabelBinarizer "sklearn.preprocessing.LabelBinarizer")
Class used to wrap the functionality of label\_binarize and allow for fitting to classes independently of the transform operation.
#### Examples
```
>>> from sklearn.preprocessing import label_binarize
>>> label_binarize([1, 6], classes=[1, 2, 4, 6])
array([[1, 0, 0, 0],
[0, 0, 0, 1]])
```
The class ordering is preserved:
```
>>> label_binarize([1, 6], classes=[1, 6, 4, 2])
array([[1, 0, 0, 0],
[0, 1, 0, 0]])
```
Binary targets transform to a column vector:
```
>>> label_binarize(['yes', 'no', 'no', 'yes'], classes=['no', 'yes'])
array([[1],
[0],
[0],
[1]])
```
Examples using `sklearn.preprocessing.label_binarize`
-----------------------------------------------------
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
[Receiver Operating Characteristic (ROC)](../../auto_examples/model_selection/plot_roc#sphx-glr-auto-examples-model-selection-plot-roc-py)
sklearn.datasets.load\_boston
=============================
sklearn.datasets.load\_boston(*\**, *return\_X\_y=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_base.py#L1173)
DEPRECATED: `load_boston` is deprecated in 1.0 and will be removed in 1.2.
Load and return the Boston house-prices dataset (regression).
| | |
| --- | --- |
| Samples total | 506 |
| Dimensionality | 13 |
| Features | real, positive |
| Targets | real, 5.0 - 50.0 |
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/toy_dataset.html#boston-dataset).
Warning
The Boston housing prices dataset has an ethical problem: as investigated in [[1]](#rec2f484fdebe-1), the authors of this dataset engineered a non-invertible variable “B” assuming that racial self-segregation had a positive impact on house prices [[2]](#rec2f484fdebe-2). Furthermore, the goal of the research that led to the creation of this dataset was to study the impact of air quality, but it did not give an adequate demonstration of the validity of this assumption.
The scikit-learn maintainers therefore strongly discourage the use of this dataset unless the purpose of the code is to study and educate about ethical issues in data science and machine learning.
In this special case, you can fetch the dataset from the original source:
```
import pandas as pd
import numpy as np
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
target = raw_df.values[1::2, 2]
```
Alternative datasets include the California housing dataset [[3]](#rec2f484fdebe-3) (i.e. [`fetch_california_housing`](sklearn.datasets.fetch_california_housing#sklearn.datasets.fetch_california_housing "sklearn.datasets.fetch_california_housing")) and Ames housing dataset [[4]](#rec2f484fdebe-4). You can load the datasets as follows:
```
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
```
for the California housing dataset and:
```
from sklearn.datasets import fetch_openml
housing = fetch_openml(name="house_prices", as_frame=True)
```
for the Ames housing dataset.
Parameters:
**return\_X\_y**bool, default=False
If True, returns `(data, target)` instead of a Bunch object. See below for more information about the `data` and `target` object.
New in version 0.18.
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
datandarray of shape (506, 13)
The data matrix.
targetndarray of shape (506,)
The regression target.
filenamestr
The physical location of the Boston CSV dataset.
New in version 0.20.
DESCRstr
The full description of the dataset.
feature\_namesndarray
The names of the features.
**(data, target)**tuple if `return_X_y` is True
A tuple of two ndarrays. The first contains a 2D array of shape (506, 13) with each row representing one sample and each column representing the features. The second array of shape (506,) contains the target samples.
New in version 0.18.
#### Notes
Changed in version 0.20: Fixed a wrong data point at [445, 0].
#### References
[[1](#id1)] [Racist data destruction? M Carlisle,](https://medium.com/@docintangible/racist-data-destruction-113e3eff54a8)
[[2](#id2)] [Harrison Jr, David, and Daniel L. Rubinfeld. “Hedonic housing prices and the demand for clean air.” Journal of environmental economics and management 5.1 (1978): 81-102.](https://www.researchgate.net/publication/4974606_Hedonic_housing_prices_and_the_demand_for_clean_air)
[[3](#id3)] [California housing dataset](https://scikit-learn.org/stable/datasets/real_world.html#california-housing-dataset)
[[4](#id4)] [Ames housing dataset](https://www.openml.org/d/42165)
#### Examples
```
>>> import warnings
>>> from sklearn.datasets import load_boston
>>> with warnings.catch_warnings():
... # You should probably not use this dataset.
... warnings.filterwarnings("ignore")
... X, y = load_boston(return_X_y=True)
>>> print(X.shape)
(506, 13)
```
scikit_learn sklearn.metrics.normalized_mutual_info_score sklearn.metrics.normalized\_mutual\_info\_score
===============================================
sklearn.metrics.normalized\_mutual\_info\_score(*labels\_true*, *labels\_pred*, *\**, *average\_method='arithmetic'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_supervised.py#L951)
Normalized Mutual Information between two clusterings.
Normalized Mutual Information (NMI) is a normalization of the Mutual Information (MI) score to scale the results between 0 (no mutual information) and 1 (perfect correlation). In this function, mutual information is normalized by some generalized mean of `H(labels_true)` and `H(labels_pred)`, defined by the `average_method`.
This measure is not adjusted for chance. Therefore [`adjusted_mutual_info_score`](sklearn.metrics.adjusted_mutual_info_score#sklearn.metrics.adjusted_mutual_info_score "sklearn.metrics.adjusted_mutual_info_score") might be preferred.
This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way.
This metric is furthermore symmetric: switching `labels_true` with `labels_pred` will return the same score value. This can be useful to measure the agreement of two independent label assignment strategies on the same dataset when the real ground truth is not known.
Read more in the [User Guide](../clustering#mutual-info-score).
Parameters:
**labels\_true**int array, shape = [n\_samples]
A clustering of the data into disjoint subsets.
**labels\_pred**int array-like of shape (n\_samples,)
A clustering of the data into disjoint subsets.
**average\_method**str, default=’arithmetic’
How to compute the normalizer in the denominator. Possible options are ‘min’, ‘geometric’, ‘arithmetic’, and ‘max’.
New in version 0.20.
Changed in version 0.22: The default value of `average_method` changed from ‘geometric’ to ‘arithmetic’.
Returns:
**nmi**float
Score between 0.0 and 1.0 in normalized nats (based on the natural logarithm). 1.0 stands for perfectly complete labeling.
See also
[`v_measure_score`](sklearn.metrics.v_measure_score#sklearn.metrics.v_measure_score "sklearn.metrics.v_measure_score")
V-Measure (NMI with arithmetic mean option).
[`adjusted_rand_score`](sklearn.metrics.adjusted_rand_score#sklearn.metrics.adjusted_rand_score "sklearn.metrics.adjusted_rand_score")
Adjusted Rand Index.
[`adjusted_mutual_info_score`](sklearn.metrics.adjusted_mutual_info_score#sklearn.metrics.adjusted_mutual_info_score "sklearn.metrics.adjusted_mutual_info_score")
Adjusted Mutual Information (adjusted against chance).
#### Examples
Perfect labelings are both homogeneous and complete, hence have score 1.0:
```
>>> from sklearn.metrics.cluster import normalized_mutual_info_score
>>> normalized_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 1])
...
1.0
>>> normalized_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])
...
1.0
```
If class members are completely split across different clusters, the assignment is totally incomplete, hence the NMI is null:
```
>>> normalized_mutual_info_score([0, 0, 0, 0], [0, 1, 2, 3])
...
0.0
```
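The `average_method` parameter only changes the normalizer applied to the mutual information; a short sketch comparing the options on an imperfect labeling (the values quoted in the comments are approximate):
```
from sklearn.metrics import normalized_mutual_info_score

labels_true = [0, 0, 1, 1]
labels_pred = [0, 0, 1, 2]  # one true cluster is split in two

for method in ("min", "geometric", "arithmetic", "max"):
    nmi = normalized_mutual_info_score(
        labels_true, labels_pred, average_method=method
    )
    print(method, round(nmi, 3))

# 'min' gives the largest value (roughly 1.0) and 'max' the smallest
# (roughly 0.667), because the mutual information is divided by
# min(H_true, H_pred) and max(H_true, H_pred) respectively.
```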
scikit_learn sklearn.cluster.MeanShift sklearn.cluster.MeanShift
=========================
*class*sklearn.cluster.MeanShift(*\**, *bandwidth=None*, *seeds=None*, *bin\_seeding=False*, *min\_bin\_freq=1*, *cluster\_all=True*, *n\_jobs=None*, *max\_iter=300*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_mean_shift.py#L255)
Mean shift clustering using a flat kernel.
Mean shift clustering aims to discover “blobs” in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.
Seeding is performed using a binning technique for scalability.
Read more in the [User Guide](../clustering#mean-shift).
Parameters:
**bandwidth**float, default=None
Bandwidth used in the RBF kernel.
If not given, the bandwidth is estimated using sklearn.cluster.estimate\_bandwidth; see the documentation for that function for hints on scalability (see also the Notes, below).
**seeds**array-like of shape (n\_samples, n\_features), default=None
Seeds used to initialize kernels. If not set, the seeds are calculated by clustering.get\_bin\_seeds with bandwidth as the grid size and default values for other parameters.
**bin\_seeding**bool, default=False
If true, initial kernel locations are not locations of all points, but rather the location of the discretized version of points, where points are binned onto a grid whose coarseness corresponds to the bandwidth. Setting this option to True will speed up the algorithm because fewer seeds will be initialized. The default value is False. Ignored if seeds argument is not None.
**min\_bin\_freq**int, default=1
To speed up the algorithm, accept only those bins with at least min\_bin\_freq points as seeds.
**cluster\_all**bool, default=True
If true, then all points are clustered, even those orphans that are not within any kernel. Orphans are assigned to the nearest kernel. If false, then orphans are given cluster label -1.
**n\_jobs**int, default=None
The number of jobs to use for the computation. The mean-shift iterations are computed in parallel across the seed points.
`None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**max\_iter**int, default=300
Maximum number of iterations per seed point before the clustering operation terminates (for that seed point), if it has not converged yet.
New in version 0.22.
Attributes:
**cluster\_centers\_**ndarray of shape (n\_clusters, n\_features)
Coordinates of cluster centers.
**labels\_**ndarray of shape (n\_samples,)
Labels of each point.
**n\_iter\_**int
Maximum number of iterations performed on each seed.
New in version 0.22.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`KMeans`](sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans")
K-Means clustering.
#### Notes
Scalability:
Because this implementation uses a flat kernel and a Ball Tree to look up members of each kernel, the complexity will tend towards O(T\*n\*log(n)) in lower dimensions, with n the number of samples and T the number of points. In higher dimensions the complexity will tend towards O(T\*n^2).
Scalability can be boosted by using fewer seeds, for example by using a higher value of min\_bin\_freq in the get\_bin\_seeds function.
Note that the estimate\_bandwidth function is much less scalable than the mean shift algorithm and will be the bottleneck if it is used.
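A minimal sketch of this workflow, estimating the bandwidth from the data with `estimate_bandwidth` and enabling binned seeding (the `quantile` value is an arbitrary illustrative choice):
```
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

X = np.array([[1, 1], [2, 1], [1, 0],
              [4, 7], [3, 5], [3, 6]])

# The bandwidth strongly affects the result; estimate it from the data
# rather than guessing it. Keep in mind that estimate_bandwidth itself
# can become the bottleneck on large inputs.
bandwidth = estimate_bandwidth(X, quantile=0.5)

# bin_seeding=True seeds from a coarse grid of binned points, which is
# cheaper than seeding from every sample on large datasets.
clustering = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)
print(clustering.cluster_centers_)
print(clustering.labels_)
```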
#### References
Dorin Comaniciu and Peter Meer, “Mean Shift: A robust approach toward feature space analysis”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002. pp. 603-619.
#### Examples
```
>>> from sklearn.cluster import MeanShift
>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [1, 0],
... [4, 7], [3, 5], [3, 6]])
>>> clustering = MeanShift(bandwidth=2).fit(X)
>>> clustering.labels_
array([1, 1, 1, 0, 0, 0])
>>> clustering.predict([[0, 0], [5, 5]])
array([1, 0])
>>> clustering
MeanShift(bandwidth=2)
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cluster.MeanShift.fit "sklearn.cluster.MeanShift.fit")(X[, y]) | Perform clustering. |
| [`fit_predict`](#sklearn.cluster.MeanShift.fit_predict "sklearn.cluster.MeanShift.fit_predict")(X[, y]) | Perform clustering on `X` and returns cluster labels. |
| [`get_params`](#sklearn.cluster.MeanShift.get_params "sklearn.cluster.MeanShift.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.cluster.MeanShift.predict "sklearn.cluster.MeanShift.predict")(X) | Predict the closest cluster each sample in X belongs to. |
| [`set_params`](#sklearn.cluster.MeanShift.set_params "sklearn.cluster.MeanShift.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_mean_shift.py#L401)
Perform clustering.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples to cluster.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Fitted instance.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L732)
Perform clustering on `X` and returns cluster labels.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**labels**ndarray of shape (n\_samples,), dtype=np.int64
Cluster labels.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_mean_shift.py#L498)
Predict the closest cluster each sample in X belongs to.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
New data to predict.
Returns:
**labels**ndarray of shape (n\_samples,)
Index of the cluster each sample belongs to.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.cluster.MeanShift`
------------------------------------------
[A demo of the mean-shift clustering algorithm](../../auto_examples/cluster/plot_mean_shift#sphx-glr-auto-examples-cluster-plot-mean-shift-py)
[Comparing different clustering algorithms on toy datasets](../../auto_examples/cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
scikit_learn sklearn.linear_model.LarsCV sklearn.linear\_model.LarsCV
============================
*class*sklearn.linear\_model.LarsCV(*\**, *fit\_intercept=True*, *verbose=False*, *max\_iter=500*, *normalize='deprecated'*, *precompute='auto'*, *cv=None*, *max\_n\_alphas=1000*, *n\_jobs=None*, *eps=2.220446049250313e-16*, *copy\_X=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_least_angle.py#L1472)
Cross-validated Least Angle Regression model.
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
Read more in the [User Guide](../linear_model#least-angle-regression).
Parameters:
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**verbose**bool or int, default=False
Sets the verbosity amount.
**max\_iter**int, default=500
Maximum number of iterations to perform.
**normalize**bool, default=True
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0. It will default to False in 1.2 and be removed in 1.4.
**precompute**bool, ‘auto’ or array-like , default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix cannot be passed as argument since we will use only subsets of X.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross-validation,
* integer, to specify the number of folds.
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, `KFold` is used.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**max\_n\_alphas**int, default=1000
The maximum number of points on the path used to compute the residuals in the cross-validation.
**n\_jobs**int or None, default=None
Number of CPUs to use during the cross validation. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**eps**float, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the `tol` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
Attributes:
**active\_**list of length n\_alphas or list of such lists
Indices of active variables at the end of the path. If this is a list of lists, the outer list length is `n_targets`.
**coef\_**array-like of shape (n\_features,)
Parameter vector (w in the formulation formula).
**intercept\_**float
Independent term in the decision function.
**coef\_path\_**array-like of shape (n\_features, n\_alphas)
The varying values of the coefficients along the path.
**alpha\_**float
The estimated regularization parameter alpha.
**alphas\_**array-like of shape (n\_alphas,)
The different values of alpha along the path.
**cv\_alphas\_**array-like of shape (n\_cv\_alphas,)
All the values of alpha along the path for the different folds.
**mse\_path\_**array-like of shape (n\_folds, n\_cv\_alphas)
The mean squared error on the left-out data for each fold along the path (alpha values given by `cv_alphas_`).
**n\_iter\_**array-like or int
The number of iterations run by Lars with the optimal alpha.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`lasso_path`](sklearn.linear_model.lasso_path#sklearn.linear_model.lasso_path "sklearn.linear_model.lasso_path")
Compute Lasso path with coordinate descent.
[`Lasso`](sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso")
Linear Model trained with L1 prior as regularizer (aka the Lasso).
[`LassoCV`](sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")
Lasso linear model with iterative fitting along a regularization path.
[`LassoLars`](sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars "sklearn.linear_model.LassoLars")
Lasso model fit with Least Angle Regression a.k.a. Lars.
[`LassoLarsIC`](sklearn.linear_model.lassolarsic#sklearn.linear_model.LassoLarsIC "sklearn.linear_model.LassoLarsIC")
Lasso model fit with Lars using BIC or AIC for model selection.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Sparse coding.
#### Notes
In `fit`, once the best parameter `alpha` is found through cross-validation, the model is fit again using the entire training set.
#### Examples
```
>>> from sklearn.linear_model import LarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_samples=200, noise=4.0, random_state=0)
>>> reg = LarsCV(cv=5, normalize=False).fit(X, y)
>>> reg.score(X, y)
0.9996...
>>> reg.alpha_
0.2961...
>>> reg.predict(X[:1,])
array([154.3996...])
```
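The attributes listed above record the cross-validation path; a small sketch, continuing the example above, of inspecting them:
```
# cv_alphas_ holds the alpha grid explored during cross-validation and
# mse_path_ the corresponding left-out mean squared errors.
print(reg.cv_alphas_.shape)
print(reg.mse_path_.shape)
print(reg.alpha_)  # the alpha selected by minimizing the mean CV error
```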
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.LarsCV.fit "sklearn.linear_model.LarsCV.fit")(X, y) | Fit the model using X, y as training data. |
| [`get_params`](#sklearn.linear_model.LarsCV.get_params "sklearn.linear_model.LarsCV.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.LarsCV.predict "sklearn.linear_model.LarsCV.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.LarsCV.score "sklearn.linear_model.LarsCV.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.LarsCV.set_params "sklearn.linear_model.LarsCV.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_least_angle.py#L1655)
Fit the model using X, y as training data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,)
Target values.
Returns:
**self**object
Returns an instance of self.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
| programming_docs |
scikit_learn sklearn.datasets.fetch_rcv1 sklearn.datasets.fetch\_rcv1
============================
sklearn.datasets.fetch\_rcv1(*\**, *data\_home=None*, *subset='all'*, *download\_if\_missing=True*, *random\_state=None*, *shuffle=False*, *return\_X\_y=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_rcv1.py#L79)
Load the RCV1 multilabel dataset (classification).
Download it if necessary.
Version: RCV1-v2, vectors, full sets, topics multilabels.
| | |
| --- | --- |
| Classes | 103 |
| Samples total | 804414 |
| Dimensionality | 47236 |
| Features | real, between 0 and 1 |
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/real_world.html#rcv1-dataset).
New in version 0.17.
Parameters:
**data\_home**str, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit\_learn\_data’ subfolders.
**subset**{‘train’, ‘test’, ‘all’}, default=’all’
Select the dataset to load: ‘train’ for the training set (23149 samples), ‘test’ for the test set (781265 samples), ‘all’ for both, with the training samples first if shuffle is False. This follows the official LYRL2004 chronological split.
**download\_if\_missing**bool, default=True
If False, raise a IOError if the data is not locally available instead of trying to download the data from the source site.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**shuffle**bool, default=False
Whether to shuffle dataset.
**return\_X\_y**bool, default=False
If True, returns `(dataset.data, dataset.target)` instead of a Bunch object. See below for more information about the `dataset.data` and `dataset.target` object.
New in version 0.20.
Returns:
**dataset**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object. Returned only if `return_X_y` is False. `dataset` has the following attributes:
* datasparse matrix of shape (804414, 47236), dtype=np.float64
The array has 0.16% of non-zero values. Will be of CSR format.
* targetsparse matrix of shape (804414, 103), dtype=np.uint8
Each sample has a value of 1 in its categories, and 0 in others. The array has 3.15% of non-zero values. Will be of CSR format.
* sample\_idndarray of shape (804414,), dtype=np.uint32,
Identification number of each sample, as ordered in dataset.data.
* target\_namesndarray of shape (103,), dtype=object
Names of each target (RCV1 topics), as ordered in dataset.target.
* DESCRstr
Description of the RCV1 dataset.
**(data, target)**tuple
A tuple consisting of `dataset.data` and `dataset.target`, as described above. Returned only if `return_X_y` is True.
New in version 0.20.
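A minimal sketch of typical usage; note that the first call downloads a large archive and caches it under `data_home` (the shapes below correspond to the `'train'` subset described above):
```
from sklearn.datasets import fetch_rcv1

# subset='train' follows the LYRL2004 chronological split (23149 samples)
rcv1 = fetch_rcv1(subset="train")

print(rcv1.data.shape)       # (23149, 47236), CSR sparse matrix
print(rcv1.target.shape)     # (23149, 103), CSR sparse 0/1 indicator matrix
print(rcv1.target_names[:5])
```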
scikit_learn sklearn.metrics.pairwise.euclidean_distances sklearn.metrics.pairwise.euclidean\_distances
=============================================
sklearn.metrics.pairwise.euclidean\_distances(*X*, *Y=None*, *\**, *Y\_norm\_squared=None*, *squared=False*, *X\_norm\_squared=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L224)
Compute the distance matrix between each pair of rows from the vector arrays X and Y.
For efficiency reasons, the euclidean distance between a pair of row vectors x and y is computed as:
```
dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y))
```
This formulation has two advantages over other ways of computing distances. First, it is computationally efficient when dealing with sparse data. Second, if one argument varies but the other remains unchanged, then `dot(x, x)` and/or `dot(y, y)` can be pre-computed.
However, this is not the most precise way of doing this computation, because this equation potentially suffers from “catastrophic cancellation”. Also, the distance matrix returned by this function may not be exactly symmetric as required by, e.g., `scipy.spatial.distance` functions.
Read more in the [User Guide](../metrics#metrics).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples\_X, n\_features)
An array where each row is a sample and each column is a feature.
**Y**{array-like, sparse matrix} of shape (n\_samples\_Y, n\_features), default=None
An array where each row is a sample and each column is a feature. If `None`, method uses `Y=X`.
**Y\_norm\_squared**array-like of shape (n\_samples\_Y,) or (n\_samples\_Y, 1) or (1, n\_samples\_Y), default=None
Pre-computed dot-products of vectors in Y (e.g., `(Y**2).sum(axis=1)`) May be ignored in some cases, see the note below.
**squared**bool, default=False
Return squared Euclidean distances.
**X\_norm\_squared**array-like of shape (n\_samples\_X,) or (n\_samples\_X, 1) or (1, n\_samples\_X), default=None
Pre-computed dot-products of vectors in X (e.g., `(X**2).sum(axis=1)`) May be ignored in some cases, see the note below.
Returns:
**distances**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Returns the distances between the row vectors of `X` and the row vectors of `Y`.
See also
[`paired_distances`](sklearn.metrics.pairwise.paired_distances#sklearn.metrics.pairwise.paired_distances "sklearn.metrics.pairwise.paired_distances")
Distances between pairs of elements of X and Y.
#### Notes
To achieve better accuracy, `X_norm_squared` and `Y_norm_squared` may be ignored if they are passed as `np.float32`.
#### Examples
```
>>> from sklearn.metrics.pairwise import euclidean_distances
>>> X = [[0, 1], [1, 1]]
>>> # distance between rows of X
>>> euclidean_distances(X, X)
array([[0., 1.],
[1., 0.]])
>>> # get distance to origin
>>> euclidean_distances(X, [[0, 0]])
array([[1. ],
[1.41421356]])
```
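When the same matrix is reused across many calls, the squared row norms can be pre-computed once and passed in, as described above; a minimal sketch:
```
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.RandomState(0)
X = rng.rand(5, 3)
Y = rng.rand(4, 3)

# Pre-compute the squared norms once; as noted above they may be ignored
# in some cases (e.g. float32 inputs) and recomputed internally.
XX = (X ** 2).sum(axis=1)
YY = (Y ** 2).sum(axis=1)

D = euclidean_distances(X, Y, X_norm_squared=XX, Y_norm_squared=YY)
print(np.allclose(D, euclidean_distances(X, Y)))  # True
```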
scikit_learn sklearn.metrics.roc_curve sklearn.metrics.roc\_curve
==========================
sklearn.metrics.roc\_curve(*y\_true*, *y\_score*, *\**, *pos\_label=None*, *sample\_weight=None*, *drop\_intermediate=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L892)
Compute Receiver operating characteristic (ROC).
Note: this implementation is restricted to the binary classification task.
Read more in the [User Guide](../model_evaluation#roc-metrics).
Parameters:
**y\_true**ndarray of shape (n\_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos\_label should be explicitly given.
**y\_score**ndarray of shape (n\_samples,)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision\_function” on some classifiers).
**pos\_label**int or str, default=None
The label of the positive class. When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an error will be raised.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**drop\_intermediate**bool, default=True
Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves.
New in version 0.17: parameter *drop\_intermediate*.
Returns:
**fpr**ndarray of shape (>2,)
Increasing false positive rates such that element i is the false positive rate of predictions with score >= `thresholds[i]`.
**tpr**ndarray of shape (>2,)
Increasing true positive rates such that element `i` is the true positive rate of predictions with score >= `thresholds[i]`.
**thresholds**ndarray of shape = (n\_thresholds,)
Decreasing thresholds on the decision function used to compute fpr and tpr. `thresholds[0]` represents no instances being predicted and is arbitrarily set to `max(y_score) + 1`.
See also
[`RocCurveDisplay.from_estimator`](sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay.from_estimator "sklearn.metrics.RocCurveDisplay.from_estimator")
Plot Receiver Operating Characteristic (ROC) curve given an estimator and some data.
[`RocCurveDisplay.from_predictions`](sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay.from_predictions "sklearn.metrics.RocCurveDisplay.from_predictions")
Plot Receiver Operating Characteristic (ROC) curve given the true and predicted values.
[`det_curve`](sklearn.metrics.det_curve#sklearn.metrics.det_curve "sklearn.metrics.det_curve")
Compute error rates for different probability thresholds.
[`roc_auc_score`](sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score")
Compute the area under the ROC curve.
#### Notes
Since the thresholds are sorted from low to high values, they are reversed upon returning them to ensure they correspond to both `fpr` and `tpr`, which are sorted in reversed order during their calculation.
#### References
[1] [Wikipedia entry for the Receiver operating characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)
[2] Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition Letters, 2006, 27(8):861-874.
#### Examples
```
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
>>> fpr
array([0. , 0. , 0.5, 0.5, 1. ])
>>> tpr
array([0. , 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([1.8 , 0.8 , 0.4 , 0.35, 0.1 ])
```
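Continuing the example above, the area under the returned curve can be computed with `sklearn.metrics.auc`:
```
>>> metrics.auc(fpr, tpr)
0.75
```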
Examples using `sklearn.metrics.roc_curve`
------------------------------------------
[Species distribution modeling](../../auto_examples/applications/plot_species_distribution_modeling#sphx-glr-auto-examples-applications-plot-species-distribution-modeling-py)
[Visualizations with Display Objects](../../auto_examples/miscellaneous/plot_display_object_visualization#sphx-glr-auto-examples-miscellaneous-plot-display-object-visualization-py)
[Detection error tradeoff (DET) curve](../../auto_examples/model_selection/plot_det#sphx-glr-auto-examples-model-selection-plot-det-py)
[Receiver Operating Characteristic (ROC)](../../auto_examples/model_selection/plot_roc#sphx-glr-auto-examples-model-selection-plot-roc-py)
scikit_learn sklearn.inspection.permutation_importance sklearn.inspection.permutation\_importance
==========================================
sklearn.inspection.permutation\_importance(*estimator*, *X*, *y*, *\**, *scoring=None*, *n\_repeats=5*, *n\_jobs=None*, *random\_state=None*, *sample\_weight=None*, *max\_samples=1.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/inspection/_permutation_importance.py#L103)
Permutation importance for feature evaluation [[BRE]](#rd9e56ef97513-bre).
The [estimator](https://scikit-learn.org/1.1/glossary.html#term-estimator) is required to be a fitted estimator. `X` can be the data set used to train the estimator or a hold-out set. The permutation importance of a feature is calculated as follows. First, a baseline metric, defined by [scoring](https://scikit-learn.org/1.1/glossary.html#term-scoring), is evaluated on a (potentially different) dataset defined by `X`. Next, a feature column from the validation set is permuted and the metric is evaluated again. The permutation importance is defined to be the difference between the baseline metric and the metric obtained after permuting the feature column.
Read more in the [User Guide](../permutation_importance#permutation-importance).
Parameters:
**estimator**object
An estimator that has already been [fitted](https://scikit-learn.org/1.1/glossary.html#term-fitted) and is compatible with [scorer](https://scikit-learn.org/1.1/glossary.html#term-scorer).
**X**ndarray or DataFrame, shape (n\_samples, n\_features)
Data on which permutation importance will be computed.
**y**array-like or None, shape (n\_samples, ) or (n\_samples, n\_classes)
Targets for supervised or `None` for unsupervised.
**scoring**str, callable, list, tuple, or dict, default=None
Scorer to use. If `scoring` represents a single score, one can use:
* a single string (see [The scoring parameter: defining model evaluation rules](../model_evaluation#scoring-parameter));
* a callable (see [Defining your scoring strategy from metric functions](../model_evaluation#scoring)) that returns a single value.
If `scoring` represents multiple scores, one can use:
* a list or tuple of unique strings;
* a callable returning a dictionary where the keys are the metric names and the values are the metric scores;
* a dictionary with metric names as keys and callables as values.
Passing multiple scores to `scoring` is more efficient than calling `permutation_importance` for each of the scores as it reuses predictions to avoid redundant computation.
If None, the estimator’s default scorer is used.
**n\_repeats**int, default=5
Number of times to permute a feature.
**n\_jobs**int or None, default=None
Number of jobs to run in parallel. The computation is done by computing the permutation score for each column and is parallelized over the columns. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**random\_state**int, RandomState instance, default=None
Pseudo-random number generator to control the permutations of each feature. Pass an int to get reproducible results across function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights used in scoring.
New in version 0.24.
**max\_samples**int or float, default=1.0
The number of samples to draw from X to compute feature importance in each repeat (without replacement).
* If int, then draw `max_samples` samples.
* If float, then draw `max_samples * X.shape[0]` samples.
* If `max_samples` is equal to `1.0` or `X.shape[0]`, all samples will be used.
While using this option may provide less accurate importance estimates, it keeps the method tractable when evaluating feature importance on large datasets. In combination with `n_repeats`, this makes it possible to control the trade-off between computational speed and statistical accuracy of this method.
New in version 1.0.
Returns:
**result**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch") or dict of such instances
Dictionary-like object, with the following attributes.
importances\_meanndarray of shape (n\_features, )
Mean of feature importance over `n_repeats`.
importances\_stdndarray of shape (n\_features, )
Standard deviation over `n_repeats`.
importancesndarray of shape (n\_features, n\_repeats)
Raw permutation importance scores.
If there are multiple scoring metrics in the scoring parameter, `result` is a dict with scorer names as keys (e.g. ‘roc\_auc’) and `Bunch` objects like above as values.
#### References
[[BRE](#id1)] [L. Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001.](https://doi.org/10.1023/A:1010933404324)
#### Examples
```
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X = [[1, 9, 9],[1, 9, 9],[1, 9, 9],
... [0, 9, 9],[0, 9, 9],[0, 9, 9]]
>>> y = [1, 1, 1, 0, 0, 0]
>>> clf = LogisticRegression().fit(X, y)
>>> result = permutation_importance(clf, X, y, n_repeats=10,
... random_state=0)
>>> result.importances_mean
array([0.4666..., 0. , 0. ])
>>> result.importances_std
array([0.2211..., 0. , 0. ])
```
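As described for the `scoring` parameter above, several metrics can be evaluated in a single call, in which case the result is a dict keyed by scorer name; a short sketch reusing `clf`, `X`, and `y` from the example (the scorer names are standard scoring strings):
```
result_multi = permutation_importance(
    clf, X, y,
    scoring=["accuracy", "balanced_accuracy"],
    n_repeats=10,
    random_state=0,
)
# One Bunch per scorer; predictions are reused across both metrics.
for name, res in result_multi.items():
    print(name, res.importances_mean)
```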
Examples using `sklearn.inspection.permutation_importance`
----------------------------------------------------------
[Release Highlights for scikit-learn 0.22](../../auto_examples/release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
[Feature importances with a forest of trees](../../auto_examples/ensemble/plot_forest_importances#sphx-glr-auto-examples-ensemble-plot-forest-importances-py)
[Gradient Boosting regression](../../auto_examples/ensemble/plot_gradient_boosting_regression#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-regression-py)
[Pixel importances with a parallel forest of trees](../../auto_examples/ensemble/plot_forest_importances_faces#sphx-glr-auto-examples-ensemble-plot-forest-importances-faces-py)
[Permutation Importance vs Random Forest Feature Importance (MDI)](../../auto_examples/inspection/plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py)
[Permutation Importance with Multicollinear or Correlated Features](../../auto_examples/inspection/plot_permutation_importance_multicollinear#sphx-glr-auto-examples-inspection-plot-permutation-importance-multicollinear-py)
scikit_learn sklearn.preprocessing.StandardScaler sklearn.preprocessing.StandardScaler
====================================
*class*sklearn.preprocessing.StandardScaler(*\**, *copy=True*, *with\_mean=True*, *with\_std=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L635)
Standardize features by removing the mean and scaling to unit variance.
The standard score of a sample `x` is calculated as:
z = (x - u) / s
where `u` is the mean of the training samples or zero if `with_mean=False`, and `s` is the standard deviation of the training samples or one if `with_std=False`.
Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using [`transform`](#sklearn.preprocessing.StandardScaler.transform "sklearn.preprocessing.StandardScaler.transform").
Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.
This scaler can also be applied to sparse CSR or CSC matrices by passing `with_mean=False` to avoid breaking the sparsity structure of the data.
Read more in the [User Guide](../preprocessing#preprocessing-scaler).
Parameters:
**copy**bool, default=True
If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
**with\_mean**bool, default=True
If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
**with\_std**bool, default=True
If True, scale the data to unit variance (or equivalently, unit standard deviation).
Attributes:
**scale\_**ndarray of shape (n\_features,) or None
Per feature relative scaling of the data to achieve zero mean and unit variance. Generally this is calculated using `np.sqrt(var_)`. If a variance is zero, we can’t achieve unit variance, and the data is left as-is, giving a scaling factor of 1. `scale_` is equal to `None` when `with_std=False`.
New in version 0.17: *scale\_*
**mean\_**ndarray of shape (n\_features,) or None
The mean value for each feature in the training set. Equal to `None` when `with_mean=False`.
**var\_**ndarray of shape (n\_features,) or None
The variance for each feature in the training set. Used to compute `scale_`. Equal to `None` when `with_std=False`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_seen\_**int or ndarray of shape (n\_features,)
The number of samples processed by the estimator for each feature. If there are no missing samples, the `n_samples_seen` will be an integer, otherwise it will be an array of dtype int. If `sample_weights` are used it will be a float (if no missing data) or an array of dtype float that sums the weights seen so far. Will be reset on new calls to fit, but increments across `partial_fit` calls.
See also
[`scale`](sklearn.preprocessing.scale#sklearn.preprocessing.scale "sklearn.preprocessing.scale")
Equivalent function without the estimator API.
[`PCA`](sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")
Further removes the linear correlation across features with ‘whiten=True’.
#### Notes
NaNs are treated as missing values: disregarded in fit, and maintained in transform.
We use a biased estimator for the standard deviation, equivalent to `numpy.std(x, ddof=0)`. Note that the choice of `ddof` is unlikely to affect model performance.
For a comparison of the different scalers, transformers, and normalizers, see [examples/preprocessing/plot\_all\_scaling.py](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
#### Examples
```
>>> from sklearn.preprocessing import StandardScaler
>>> data = [[0, 0], [0, 0], [1, 1], [1, 1]]
>>> scaler = StandardScaler()
>>> print(scaler.fit(data))
StandardScaler()
>>> print(scaler.mean_)
[0.5 0.5]
>>> print(scaler.transform(data))
[[-1. -1.]
[-1. -1.]
[ 1. 1.]
[ 1. 1.]]
>>> print(scaler.transform([[2, 2]]))
[[3. 3.]]
```
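In practice the scaler is usually combined with a downstream estimator so that the mean and standard deviation are learned on the training split only; a minimal sketch (the dataset and classifier are arbitrary illustrative choices):
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline learns the scaling statistics on the training split only,
# avoiding leakage of test-set statistics into the preprocessing.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```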
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.StandardScaler.fit "sklearn.preprocessing.StandardScaler.fit")(X[, y, sample\_weight]) | Compute the mean and std to be used for later scaling. |
| [`fit_transform`](#sklearn.preprocessing.StandardScaler.fit_transform "sklearn.preprocessing.StandardScaler.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.preprocessing.StandardScaler.get_feature_names_out "sklearn.preprocessing.StandardScaler.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.preprocessing.StandardScaler.get_params "sklearn.preprocessing.StandardScaler.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.StandardScaler.inverse_transform "sklearn.preprocessing.StandardScaler.inverse_transform")(X[, copy]) | Scale back the data to the original representation. |
| [`partial_fit`](#sklearn.preprocessing.StandardScaler.partial_fit "sklearn.preprocessing.StandardScaler.partial_fit")(X[, y, sample\_weight]) | Online computation of mean and std on X for later scaling. |
| [`set_params`](#sklearn.preprocessing.StandardScaler.set_params "sklearn.preprocessing.StandardScaler.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.StandardScaler.transform "sklearn.preprocessing.StandardScaler.transform")(X[, copy]) | Perform standardization by centering and scaling. |
fit(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L784)
Compute the mean and std to be used for later scaling.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to compute the mean and standard deviation used for later scaling along the features axis.
**y**None
Ignored.
**sample\_weight**array-like of shape (n\_samples,), default=None
Individual weights for each sample.
New in version 0.24: parameter *sample\_weight* support to StandardScaler.
Returns:
**self**object
Fitted scaler.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L880)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Same as input features.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*, *copy=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L999)
Scale back the data to the original representation.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to scale along the features axis.
**copy**bool, default=None
Copy the input X or not.
Returns:
**X\_tr**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Transformed array.
partial\_fit(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L811)
Online computation of mean and std on X for later scaling.
All of X is processed as a single batch. This is intended for cases when [`fit`](#sklearn.preprocessing.StandardScaler.fit "sklearn.preprocessing.StandardScaler.fit") is not feasible due to a very large number of `n_samples` or because X is read from a continuous stream.
The algorithm for incremental mean and std is given in Equation 1.5a,b in Chan, Tony F., Gene H. Golub, and Randall J. LeVeque. “Algorithms for computing the sample variance: Analysis and recommendations.” The American Statistician 37.3 (1983): 242-247:
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to compute the mean and standard deviation used for later scaling along the features axis.
**y**None
Ignored.
**sample\_weight**array-like of shape (n\_samples,), default=None
Individual weights for each sample.
New in version 0.24: parameter *sample\_weight* support to StandardScaler.
Returns:
**self**object
Fitted scaler.
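A minimal sketch of incremental scaling with `partial_fit`, where the batches are assumed to arrive one at a time (e.g. streamed from disk):
```
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
batches = [rng.normal(loc=5.0, scale=2.0, size=(100, 3)) for _ in range(10)]

scaler = StandardScaler()
for batch in batches:
    # Each call updates the running mean/variance; the statistics
    # accumulate across calls instead of being reset as with fit().
    scaler.partial_fit(batch)

print(scaler.mean_)            # close to 5.0 for every feature
print(np.sqrt(scaler.var_))    # close to 2.0 for every feature
print(scaler.n_samples_seen_)  # 1000
```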
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*, *copy=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L957)
Perform standardization by centering and scaling.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to scale along the features axis.
**copy**bool, default=None
Copy the input X or not.
Returns:
**X\_tr**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Transformed array.
Examples using `sklearn.preprocessing.StandardScaler`
-----------------------------------------------------
[Release Highlights for scikit-learn 1.1](../../auto_examples/release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Release Highlights for scikit-learn 1.0](../../auto_examples/release_highlights/plot_release_highlights_1_0_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-0-0-py)
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Release Highlights for scikit-learn 0.22](../../auto_examples/release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
[Classifier comparison](../../auto_examples/classification/plot_classifier_comparison#sphx-glr-auto-examples-classification-plot-classifier-comparison-py)
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Comparing different clustering algorithms on toy datasets](../../auto_examples/cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
[Comparing different hierarchical linkage methods on toy datasets](../../auto_examples/cluster/plot_linkage_comparison#sphx-glr-auto-examples-cluster-plot-linkage-comparison-py)
[Demo of DBSCAN clustering algorithm](../../auto_examples/cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py)
[Principal Component Regression vs Partial Least Squares Regression](../../auto_examples/cross_decomposition/plot_pcr_vs_pls#sphx-glr-auto-examples-cross-decomposition-plot-pcr-vs-pls-py)
[Factor Analysis (with rotation) to visualize patterns](../../auto_examples/decomposition/plot_varimax_fa#sphx-glr-auto-examples-decomposition-plot-varimax-fa-py)
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Faces recognition example using eigenfaces and SVMs](../../auto_examples/applications/plot_face_recognition#sphx-glr-auto-examples-applications-plot-face-recognition-py)
[Prediction Latency](../../auto_examples/applications/plot_prediction_latency#sphx-glr-auto-examples-applications-plot-prediction-latency-py)
[Comparing Linear Bayesian Regressors](../../auto_examples/linear_model/plot_ard#sphx-glr-auto-examples-linear-model-plot-ard-py)
[L1 Penalty and Sparsity in Logistic Regression](../../auto_examples/linear_model/plot_logistic_l1_l2_sparsity#sphx-glr-auto-examples-linear-model-plot-logistic-l1-l2-sparsity-py)
[Lasso model selection via information criteria](../../auto_examples/linear_model/plot_lasso_lars_ic#sphx-glr-auto-examples-linear-model-plot-lasso-lars-ic-py)
[Lasso model selection: AIC-BIC / cross-validation](../../auto_examples/linear_model/plot_lasso_model_selection#sphx-glr-auto-examples-linear-model-plot-lasso-model-selection-py)
[MNIST classification using multinomial logistic + L1](../../auto_examples/linear_model/plot_sparse_logistic_regression_mnist#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-mnist-py)
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Tweedie regression on insurance claims](../../auto_examples/linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
[Common pitfalls in the interpretation of coefficients of linear models](../../auto_examples/inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
[Advanced Plotting With Partial Dependence](../../auto_examples/miscellaneous/plot_partial_dependence_visualization_api#sphx-glr-auto-examples-miscellaneous-plot-partial-dependence-visualization-api-py)
[Displaying Pipelines](../../auto_examples/miscellaneous/plot_pipeline_display#sphx-glr-auto-examples-miscellaneous-plot-pipeline-display-py)
[Visualizations with Display Objects](../../auto_examples/miscellaneous/plot_display_object_visualization#sphx-glr-auto-examples-miscellaneous-plot-display-object-visualization-py)
[Detection error tradeoff (DET) curve](../../auto_examples/model_selection/plot_det#sphx-glr-auto-examples-model-selection-plot-det-py)
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
[Comparing Nearest Neighbors with and without Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_classification#sphx-glr-auto-examples-neighbors-plot-nca-classification-py)
[Dimensionality Reduction with Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_dim_reduction#sphx-glr-auto-examples-neighbors-plot-nca-dim-reduction-py)
[Varying regularization in Multi-layer Perceptron](../../auto_examples/neural_networks/plot_mlp_alpha#sphx-glr-auto-examples-neural-networks-plot-mlp-alpha-py)
[Column Transformer with Mixed Types](../../auto_examples/compose/plot_column_transformer_mixed_types#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py)
[Pipelining: chaining a PCA and a logistic regression](../../auto_examples/compose/plot_digits_pipe#sphx-glr-auto-examples-compose-plot-digits-pipe-py)
[Compare the effect of different scalers on data with outliers](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py)
[Feature discretization](../../auto_examples/preprocessing/plot_discretization_classification#sphx-glr-auto-examples-preprocessing-plot-discretization-classification-py)
[Importance of Feature Scaling](../../auto_examples/preprocessing/plot_scaling_importance#sphx-glr-auto-examples-preprocessing-plot-scaling-importance-py)
[RBF SVM parameters](../../auto_examples/svm/plot_rbf_parameters#sphx-glr-auto-examples-svm-plot-rbf-parameters-py)
[SVM-Anova: SVM with univariate feature selection](../../auto_examples/svm/plot_svm_anova#sphx-glr-auto-examples-svm-plot-svm-anova-py)
scikit_learn sklearn.metrics.rand_score sklearn.metrics.rand\_score
===========================
sklearn.metrics.rand\_score(*labels\_true*, *labels\_pred*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_supervised.py#L239)
Rand index.
The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings.
The raw RI score is:
RI = (number of agreeing pairs) / (number of pairs)
Read more in the [User Guide](../clustering#rand-score).
Parameters:
**labels\_true**array-like of shape (n\_samples,), dtype=integral
Ground truth class labels to be used as a reference.
**labels\_pred**array-like of shape (n\_samples,), dtype=integral
Cluster labels to evaluate.
Returns:
**RI**float
Similarity score between 0.0 and 1.0, inclusive; 1.0 stands for a perfect match.
See also
[`adjusted_rand_score`](sklearn.metrics.adjusted_rand_score#sklearn.metrics.adjusted_rand_score "sklearn.metrics.adjusted_rand_score")
Adjusted Rand Score
[`adjusted_mutual_info_score`](sklearn.metrics.adjusted_mutual_info_score#sklearn.metrics.adjusted_mutual_info_score "sklearn.metrics.adjusted_mutual_info_score")
Adjusted Mutual Information
#### Examples
Perfectly matching labelings have a score of 1 even if the labels are permuted:
```
>>> from sklearn.metrics.cluster import rand_score
>>> rand_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
```
Labelings that assign all members of each class to the same cluster are complete, but may not always be pure and are hence penalized:
```
>>> rand_score([0, 0, 1, 2], [0, 0, 1, 1])
0.83...
```
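In the second example, 5 of the 6 possible sample pairs are treated consistently by both labelings; only the pair formed by the last two samples is split in the true labeling but merged in the prediction, giving RI = 5/6 ≈ 0.83.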
scikit_learn sklearn.metrics.mean_gamma_deviance sklearn.metrics.mean\_gamma\_deviance
=====================================
sklearn.metrics.mean\_gamma\_deviance(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_regression.py#L1130)
Mean Gamma deviance regression loss.
Gamma deviance is equivalent to the Tweedie deviance with the power parameter `power=2`. It is invariant to scaling of the target variable, and measures relative errors.
Read more in the [User Guide](../model_evaluation#mean-tweedie-deviance).
Parameters:
**y\_true**array-like of shape (n\_samples,)
Ground truth (correct) target values. Requires y\_true > 0.
**y\_pred**array-like of shape (n\_samples,)
Estimated target values. Requires y\_pred > 0.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**loss**float
A non-negative floating point value (the best value is 0.0).
#### Examples
```
>>> from sklearn.metrics import mean_gamma_deviance
>>> y_true = [2, 0.5, 1, 4]
>>> y_pred = [0.5, 0.5, 2., 2.]
>>> mean_gamma_deviance(y_true, y_pred)
1.0568...
```
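As noted above, the Gamma deviance is invariant to a common rescaling of the targets and predictions; multiplying both arrays from the previous example by 10 leaves the loss unchanged:

```
>>> mean_gamma_deviance([20, 5, 10, 40], [5., 5., 20., 20.])
1.0568...
```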
scikit_learn sklearn.kernel_approximation.AdditiveChi2Sampler sklearn.kernel\_approximation.AdditiveChi2Sampler
=================================================
*class*sklearn.kernel\_approximation.AdditiveChi2Sampler(*\**, *sample\_steps=2*, *sample\_interval=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/kernel_approximation.py#L514)
Approximate feature map for additive chi2 kernel.
The approximation is based on sampling the Fourier transform of the kernel at regular intervals.
Since the kernel that is to be approximated is additive, the components of the input vectors can be treated separately. Each entry in the original space is transformed into 2\*sample\_steps-1 features, where sample\_steps is a parameter of the method. Typical values of sample\_steps include 1, 2 and 3.
Optimal choices for the sampling interval for certain data ranges can be computed (see the reference). The default values should be reasonable.
Read more in the [User Guide](../kernel_approximation#additive-chi-kernel-approx).
Parameters:
**sample\_steps**int, default=2
Gives the number of (complex) sampling points.
**sample\_interval**float, default=None
Sampling interval. Must be specified when sample\_steps not in {1,2,3}.
Attributes:
**sample\_interval\_**float
Stored sampling interval. Specified as a parameter if `sample_steps` not in {1,2,3}.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`SkewedChi2Sampler`](sklearn.kernel_approximation.skewedchi2sampler#sklearn.kernel_approximation.SkewedChi2Sampler "sklearn.kernel_approximation.SkewedChi2Sampler")
A Fourier-approximation to a non-additive variant of the chi squared kernel.
[`sklearn.metrics.pairwise.chi2_kernel`](sklearn.metrics.pairwise.chi2_kernel#sklearn.metrics.pairwise.chi2_kernel "sklearn.metrics.pairwise.chi2_kernel")
The exact chi squared kernel.
[`sklearn.metrics.pairwise.additive_chi2_kernel`](sklearn.metrics.pairwise.additive_chi2_kernel#sklearn.metrics.pairwise.additive_chi2_kernel "sklearn.metrics.pairwise.additive_chi2_kernel")
The exact additive chi squared kernel.
#### Notes
This estimator approximates a slightly different version of the additive chi squared kernel than `sklearn.metrics.pairwise.additive_chi2_kernel` computes.
#### References
See [“Efficient additive kernels via explicit feature maps”](http://www.robots.ox.ac.uk/~vedaldi/assets/pubs/vedaldi11efficient.pdf) A. Vedaldi and A. Zisserman, Pattern Analysis and Machine Intelligence, 2011
#### Examples
```
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.kernel_approximation import AdditiveChi2Sampler
>>> X, y = load_digits(return_X_y=True)
>>> chi2sampler = AdditiveChi2Sampler(sample_steps=2)
>>> X_transformed = chi2sampler.fit_transform(X, y)
>>> clf = SGDClassifier(max_iter=5, random_state=0, tol=1e-3)
>>> clf.fit(X_transformed, y)
SGDClassifier(max_iter=5, random_state=0)
>>> clf.score(X_transformed, y)
0.9499...
```
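A small sketch of the feature expansion described above: each input feature is mapped to `2 * sample_steps - 1` output features (the shapes below assume random non-negative input with 4 features):

```
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler

X = np.random.RandomState(0).rand(5, 4)   # chi2 kernels expect non-negative features
for steps in (1, 2, 3):
    Xt = AdditiveChi2Sampler(sample_steps=steps).fit_transform(X)
    print(steps, Xt.shape)                # -> (5, 4), (5, 12), (5, 20)
```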
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.kernel_approximation.AdditiveChi2Sampler.fit "sklearn.kernel_approximation.AdditiveChi2Sampler.fit")(X[, y]) | Set the parameters. |
| [`fit_transform`](#sklearn.kernel_approximation.AdditiveChi2Sampler.fit_transform "sklearn.kernel_approximation.AdditiveChi2Sampler.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.kernel_approximation.AdditiveChi2Sampler.get_feature_names_out "sklearn.kernel_approximation.AdditiveChi2Sampler.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.kernel_approximation.AdditiveChi2Sampler.get_params "sklearn.kernel_approximation.AdditiveChi2Sampler.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.kernel_approximation.AdditiveChi2Sampler.set_params "sklearn.kernel_approximation.AdditiveChi2Sampler.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.kernel_approximation.AdditiveChi2Sampler.transform "sklearn.kernel_approximation.AdditiveChi2Sampler.transform")(X) | Apply approximate feature map to X. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/kernel_approximation.py#L597)
Set the parameters.
Parameters:
**X**array-like, shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like, shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
Returns:
**self**object
Returns the transformer.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/kernel_approximation.py#L668)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.kernel_approximation.AdditiveChi2Sampler.fit "sklearn.kernel_approximation.AdditiveChi2Sampler.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/kernel_approximation.py#L635)
Apply approximate feature map to X.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
Returns:
**X\_new**{ndarray, sparse matrix}, shape = (n\_samples, n\_features \* (2\*sample\_steps - 1))
Whether the return value is an array or sparse matrix depends on the type of the input X.
scikit_learn sklearn.feature_selection.SelectKBest sklearn.feature\_selection.SelectKBest
======================================
*class*sklearn.feature\_selection.SelectKBest(*score\_func=<function f\_classif>*, *\**, *k=10*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L592)
Select features according to the k highest scores.
Read more in the [User Guide](../feature_selection#univariate-feature-selection).
Parameters:
**score\_func**callable, default=f\_classif
Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues) or a single array with scores. Default is f\_classif (see the “See also” section below). The default function only works with classification tasks.
New in version 0.18.
**k**int or “all”, default=10
Number of top features to select. The “all” option bypasses selection, for use in a parameter search.
Attributes:
**scores\_**array-like of shape (n\_features,)
Scores of features.
**pvalues\_**array-like of shape (n\_features,)
p-values of feature scores, None if `score_func` returned only scores.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`f_classif`](sklearn.feature_selection.f_classif#sklearn.feature_selection.f_classif "sklearn.feature_selection.f_classif")
ANOVA F-value between label/feature for classification tasks.
[`mutual_info_classif`](sklearn.feature_selection.mutual_info_classif#sklearn.feature_selection.mutual_info_classif "sklearn.feature_selection.mutual_info_classif")
Mutual information for a discrete target.
[`chi2`](sklearn.feature_selection.chi2#sklearn.feature_selection.chi2 "sklearn.feature_selection.chi2")
Chi-squared stats of non-negative features for classification tasks.
[`f_regression`](sklearn.feature_selection.f_regression#sklearn.feature_selection.f_regression "sklearn.feature_selection.f_regression")
F-value between label/feature for regression tasks.
[`mutual_info_regression`](sklearn.feature_selection.mutual_info_regression#sklearn.feature_selection.mutual_info_regression "sklearn.feature_selection.mutual_info_regression")
Mutual information for a continuous target.
[`SelectPercentile`](sklearn.feature_selection.selectpercentile#sklearn.feature_selection.SelectPercentile "sklearn.feature_selection.SelectPercentile")
Select features based on percentile of the highest scores.
[`SelectFpr`](sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr "sklearn.feature_selection.SelectFpr")
Select features based on a false positive rate test.
[`SelectFdr`](sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr "sklearn.feature_selection.SelectFdr")
Select features based on an estimated false discovery rate.
[`SelectFwe`](sklearn.feature_selection.selectfwe#sklearn.feature_selection.SelectFwe "sklearn.feature_selection.SelectFwe")
Select features based on family-wise error rate.
[`GenericUnivariateSelect`](sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect "sklearn.feature_selection.GenericUnivariateSelect")
Univariate feature selector with configurable mode.
#### Notes
Ties between features with equal scores will be broken in an unspecified way.
#### Examples
```
>>> from sklearn.datasets import load_digits
>>> from sklearn.feature_selection import SelectKBest, chi2
>>> X, y = load_digits(return_X_y=True)
>>> X.shape
(1797, 64)
>>> X_new = SelectKBest(chi2, k=20).fit_transform(X, y)
>>> X_new.shape
(1797, 20)
```
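Any callable with the documented signature can be passed as `score_func`; a small sketch continuing the example above with `mutual_info_classif`, which returns only scores (so `pvalues_` is set to None):

```
>>> from sklearn.feature_selection import mutual_info_classif
>>> selector = SelectKBest(mutual_info_classif, k=20).fit(X, y)
>>> print(selector.pvalues_)
None
>>> selector.scores_.shape
(64,)
```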
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.feature_selection.SelectKBest.fit "sklearn.feature_selection.SelectKBest.fit")(X, y) | Run score function on (X, y) and get the appropriate features. |
| [`fit_transform`](#sklearn.feature_selection.SelectKBest.fit_transform "sklearn.feature_selection.SelectKBest.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.feature_selection.SelectKBest.get_feature_names_out "sklearn.feature_selection.SelectKBest.get_feature_names_out")([input\_features]) | Mask feature names according to selected features. |
| [`get_params`](#sklearn.feature_selection.SelectKBest.get_params "sklearn.feature_selection.SelectKBest.get_params")([deep]) | Get parameters for this estimator. |
| [`get_support`](#sklearn.feature_selection.SelectKBest.get_support "sklearn.feature_selection.SelectKBest.get_support")([indices]) | Get a mask, or integer index, of the features selected. |
| [`inverse_transform`](#sklearn.feature_selection.SelectKBest.inverse_transform "sklearn.feature_selection.SelectKBest.inverse_transform")(X) | Reverse the transformation operation. |
| [`set_params`](#sklearn.feature_selection.SelectKBest.set_params "sklearn.feature_selection.SelectKBest.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_selection.SelectKBest.transform "sklearn.feature_selection.SelectKBest.transform")(X) | Reduce X to the selected features. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L444)
Run score function on (X, y) and get the appropriate features.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The training input samples.
**y**array-like of shape (n\_samples,)
The target values (class labels in classification, real numbers in regression).
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L146)
Mask feature names according to selected features.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_support(*indices=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L33)
Get a mask, or integer index, of the features selected.
Parameters:
**indices**bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
Returns:
**support**array
An index that selects the retained features from a feature vector. If `indices` is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If `indices` is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
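For instance, with the digits data from the example above (64 input features, `k=20`), the two forms of the return value look as follows (a minimal sketch):

```
>>> selector = SelectKBest(chi2, k=20).fit(X, y)
>>> selector.get_support().shape              # boolean mask over all input features
(64,)
>>> selector.get_support(indices=True).shape  # integer indices of the selected features
(20,)
```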
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L106)
Reverse the transformation operation.
Parameters:
**X**array of shape [n\_samples, n\_selected\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_original\_features]
`X` with columns of zeros inserted where features would have been removed by [`transform`](#sklearn.feature_selection.SelectKBest.transform "sklearn.feature_selection.SelectKBest.transform").
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L68)
Reduce X to the selected features.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_selected\_features]
The input samples with only the selected features.
Examples using `sklearn.feature_selection.SelectKBest`
------------------------------------------------------
[Release Highlights for scikit-learn 1.1](../../auto_examples/release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Pipeline ANOVA SVM](../../auto_examples/feature_selection/plot_feature_selection_pipeline#sphx-glr-auto-examples-feature-selection-plot-feature-selection-pipeline-py)
[Univariate Feature Selection](../../auto_examples/feature_selection/plot_feature_selection#sphx-glr-auto-examples-feature-selection-plot-feature-selection-py)
[Concatenating multiple feature extraction methods](../../auto_examples/compose/plot_feature_union#sphx-glr-auto-examples-compose-plot-feature-union-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](../../auto_examples/compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
scikit_learn sklearn.preprocessing.LabelEncoder sklearn.preprocessing.LabelEncoder
==================================
*class*sklearn.preprocessing.LabelEncoder[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L35)
Encode target labels with value between 0 and n\_classes-1.
This transformer should be used to encode target values, *i.e.* `y`, and not the input `X`.
Read more in the [User Guide](../preprocessing_targets#preprocessing-targets).
New in version 0.12.
Attributes:
**classes\_**ndarray of shape (n\_classes,)
Holds the label for each class.
See also
[`OrdinalEncoder`](sklearn.preprocessing.ordinalencoder#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder")
Encode categorical features using an ordinal encoding scheme.
[`OneHotEncoder`](sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder")
Encode categorical features as a one-hot numeric array.
#### Examples
`LabelEncoder` can be used to normalize labels.
```
>>> from sklearn import preprocessing
>>> le = preprocessing.LabelEncoder()
>>> le.fit([1, 2, 2, 6])
LabelEncoder()
>>> le.classes_
array([1, 2, 6])
>>> le.transform([1, 1, 2, 6])
array([0, 0, 1, 2]...)
>>> le.inverse_transform([0, 0, 1, 2])
array([1, 1, 2, 6])
```
It can also be used to transform non-numerical labels (as long as they are hashable and comparable) to numerical labels.
```
>>> le = preprocessing.LabelEncoder()
>>> le.fit(["paris", "paris", "tokyo", "amsterdam"])
LabelEncoder()
>>> list(le.classes_)
['amsterdam', 'paris', 'tokyo']
>>> le.transform(["tokyo", "tokyo", "paris"])
array([2, 2, 1]...)
>>> list(le.inverse_transform([2, 2, 1]))
['tokyo', 'tokyo', 'paris']
```
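Note that `transform` only accepts labels seen during `fit`; passing an unseen label raises a `ValueError` (a minimal sketch):

```
le = preprocessing.LabelEncoder().fit(["paris", "tokyo"])
le.transform(["paris"])    # array([0])
le.transform(["berlin"])   # raises ValueError (previously unseen label)
```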
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.LabelEncoder.fit "sklearn.preprocessing.LabelEncoder.fit")(y) | Fit label encoder. |
| [`fit_transform`](#sklearn.preprocessing.LabelEncoder.fit_transform "sklearn.preprocessing.LabelEncoder.fit_transform")(y) | Fit label encoder and return encoded labels. |
| [`get_params`](#sklearn.preprocessing.LabelEncoder.get_params "sklearn.preprocessing.LabelEncoder.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.LabelEncoder.inverse_transform "sklearn.preprocessing.LabelEncoder.inverse_transform")(y) | Transform labels back to original encoding. |
| [`set_params`](#sklearn.preprocessing.LabelEncoder.set_params "sklearn.preprocessing.LabelEncoder.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.LabelEncoder.transform "sklearn.preprocessing.LabelEncoder.transform")(y) | Transform labels to normalized encoding. |
fit(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L85)
Fit label encoder.
Parameters:
**y**array-like of shape (n\_samples,)
Target values.
Returns:
**self**object
Fitted label encoder.
fit\_transform(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L102)
Fit label encoder and return encoded labels.
Parameters:
**y**array-like of shape (n\_samples,)
Target values.
Returns:
**y**array-like of shape (n\_samples,)
Encoded labels.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L140)
Transform labels back to original encoding.
Parameters:
**y**ndarray of shape (n\_samples,)
Target values.
Returns:
**y**ndarray of shape (n\_samples,)
Original encoding.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_label.py#L119)
Transform labels to normalized encoding.
Parameters:
**y**array-like of shape (n\_samples,)
Target values.
Returns:
**y**array-like of shape (n\_samples,)
Labels as normalized encodings.
scikit_learn sklearn.model_selection.RandomizedSearchCV sklearn.model\_selection.RandomizedSearchCV
===========================================
*class*sklearn.model\_selection.RandomizedSearchCV(*estimator*, *param\_distributions*, *\**, *n\_iter=10*, *scoring=None*, *n\_jobs=None*, *refit=True*, *cv=None*, *verbose=0*, *pre\_dispatch='2\*n\_jobs'*, *random\_state=None*, *error\_score=nan*, *return\_train\_score=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L1382)
Randomized search on hyper parameters.
RandomizedSearchCV implements a “fit” and a “score” method. It also implements “score\_samples”, “predict”, “predict\_proba”, “decision\_function”, “transform” and “inverse\_transform” if they are implemented in the estimator used.
The parameters of the estimator used to apply these methods are optimized by cross-validated search over parameter settings.
In contrast to GridSearchCV, not all parameter values are tried out, but rather a fixed number of parameter settings is sampled from the specified distributions. The number of parameter settings that are tried is given by n\_iter.
If all parameters are presented as a list, sampling without replacement is performed. If at least one parameter is given as a distribution, sampling with replacement is used. It is highly recommended to use continuous distributions for continuous parameters.
Read more in the [User Guide](../grid_search#randomized-parameter-search).
New in version 0.14.
Parameters:
**estimator**estimator object
An object of that type is instantiated for each grid point. This is assumed to implement the scikit-learn estimator interface. Either the estimator needs to provide a `score` function, or `scoring` must be passed.
**param\_distributions**dict or list of dicts
Dictionary with parameters names (`str`) as keys and distributions or lists of parameters to try. Distributions must provide a `rvs` method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly. If a list of dicts is given, first a dict is sampled uniformly, and then a parameter is sampled using that dict as above.
**n\_iter**int, default=10
Number of parameter settings that are sampled. n\_iter trades off runtime vs quality of the solution.
**scoring**str, callable, list, tuple or dict, default=None
Strategy to evaluate the performance of the cross-validated model on the test set.
If `scoring` represents a single score, one can use:
* a single string (see [The scoring parameter: defining model evaluation rules](../model_evaluation#scoring-parameter));
* a callable (see [Defining your scoring strategy from metric functions](../model_evaluation#scoring)) that returns a single value.
If `scoring` represents multiple scores, one can use:
* a list or tuple of unique strings;
* a callable returning a dictionary where the keys are the metric names and the values are the metric scores;
* a dictionary with metric names as keys and callables as values.
See [Specifying multiple metrics for evaluation](../grid_search#multimetric-grid-search) for an example.
If None, the estimator’s score method is used.
**n\_jobs**int, default=None
Number of jobs to run in parallel. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Changed in version v0.20: `n_jobs` default changed from 1 to None
**refit**bool, str, or callable, default=True
Refit an estimator using the best found parameters on the whole dataset.
For multiple metric evaluation, this needs to be a `str` denoting the scorer that would be used to find the best parameters for refitting the estimator at the end.
Where there are considerations other than maximum score in choosing a best estimator, `refit` can be set to a function which returns the selected `best_index_` given `cv_results_`. In that case, the `best_estimator_` and `best_params_` will be set according to the returned `best_index_` while the `best_score_` attribute will not be available.
The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `RandomizedSearchCV` instance.
Also for multiple metric evaluation, the attributes `best_index_`, `best_score_` and `best_params_` will only be available if `refit` is set and all of them will be determined w.r.t this specific scorer.
See `scoring` parameter to know more about multiple metric evaluation.
Changed in version 0.20: Support for callable added.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross validation,
* integer, to specify the number of folds in a `(Stratified)KFold`,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**verbose**int
Controls the verbosity: the higher, the more messages.
**pre\_dispatch**int, or str, default=’2\*n\_jobs’
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
* None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
* An int, giving the exact number of total jobs that are spawned
* A str, giving an expression as a function of n\_jobs, as in ‘2\*n\_jobs’
**random\_state**int, RandomState instance or None, default=None
Pseudo random number generator state used for random uniform sampling from lists of possible values instead of scipy.stats distributions. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**error\_score**‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.
**return\_train\_score**bool, default=False
If `False`, the `cv_results_` attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.
New in version 0.19.
Changed in version 0.21: Default value was changed from `True` to `False`
Attributes:
**cv\_results\_**dict of numpy (masked) ndarrays
A dict with keys as column headers and values as columns, that can be imported into a pandas `DataFrame`.
For instance the below given table
| param\_kernel | param\_gamma | split0\_test\_score | … | rank\_test\_score |
| --- | --- | --- | --- | --- |
| ‘rbf’ | 0.1 | 0.80 | … | 1 |
| ‘rbf’ | 0.2 | 0.84 | … | 3 |
| ‘rbf’ | 0.3 | 0.70 | … | 2 |
will be represented by a `cv_results_` dict of:
```
{
'param_kernel' : masked_array(data = ['rbf', 'rbf', 'rbf'],
mask = False),
'param_gamma' : masked_array(data = [0.1 0.2 0.3], mask = False),
'split0_test_score' : [0.80, 0.84, 0.70],
'split1_test_score' : [0.82, 0.50, 0.70],
'mean_test_score' : [0.81, 0.67, 0.70],
'std_test_score' : [0.01, 0.24, 0.00],
'rank_test_score' : [1, 3, 2],
'split0_train_score' : [0.80, 0.92, 0.70],
'split1_train_score' : [0.82, 0.55, 0.70],
'mean_train_score' : [0.81, 0.74, 0.70],
'std_train_score' : [0.01, 0.19, 0.00],
'mean_fit_time' : [0.73, 0.63, 0.43],
'std_fit_time' : [0.01, 0.02, 0.01],
'mean_score_time' : [0.01, 0.06, 0.04],
'std_score_time' : [0.00, 0.00, 0.00],
'params' : [{'kernel' : 'rbf', 'gamma' : 0.1}, ...],
}
```
NOTE
The key `'params'` is used to store a list of parameter settings dicts for all the parameter candidates.
The `mean_fit_time`, `std_fit_time`, `mean_score_time` and `std_score_time` are all in seconds.
For multi-metric evaluation, the scores for all the scorers are available in the `cv_results_` dict at the keys ending with that scorer’s name (`'_<scorer_name>'`) instead of `'_score'` shown above. (‘split0\_test\_precision’, ‘mean\_train\_precision’ etc.)
**best\_estimator\_**estimator
Estimator that was chosen by the search, i.e. estimator which gave highest score (or smallest loss if specified) on the left out data. Not available if `refit=False`.
For multi-metric evaluation, this attribute is present only if `refit` is specified.
See `refit` parameter for more information on allowed values.
**best\_score\_**float
Mean cross-validated score of the best\_estimator.
For multi-metric evaluation, this is not available if `refit` is `False`. See `refit` parameter for more information.
This attribute is not available if `refit` is a function.
**best\_params\_**dict
Parameter setting that gave the best results on the hold out data.
For multi-metric evaluation, this is not available if `refit` is `False`. See `refit` parameter for more information.
**best\_index\_**int
The index (of the `cv_results_` arrays) which corresponds to the best candidate parameter setting.
The dict at `search.cv_results_['params'][search.best_index_]` gives the parameter setting for the best model, that gives the highest mean score (`search.best_score_`).
For multi-metric evaluation, this is not available if `refit` is `False`. See `refit` parameter for more information.
**scorer\_**function or a dict
Scorer function used on the held out data to choose the best parameters for the model.
For multi-metric evaluation, this attribute holds the validated `scoring` dict which maps the scorer key to the scorer callable.
**n\_splits\_**int
The number of cross-validation splits (folds/iterations).
**refit\_time\_**float
Seconds used for refitting the best model on the whole dataset.
This is present only if `refit` is not False.
New in version 0.20.
**multimetric\_**bool
Whether or not the scorers compute several metrics.
[`classes_`](#sklearn.model_selection.RandomizedSearchCV.classes_ "sklearn.model_selection.RandomizedSearchCV.classes_")ndarray of shape (n\_classes,)
Class labels.
[`n_features_in_`](#sklearn.model_selection.RandomizedSearchCV.n_features_in_ "sklearn.model_selection.RandomizedSearchCV.n_features_in_")int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if `best_estimator_` is defined (see the documentation for the `refit` parameter for more details) and that `best_estimator_` exposes `feature_names_in_` when fit.
New in version 1.0.
See also
[`GridSearchCV`](sklearn.model_selection.gridsearchcv#sklearn.model_selection.GridSearchCV "sklearn.model_selection.GridSearchCV")
Does exhaustive search over a grid of parameters.
[`ParameterSampler`](sklearn.model_selection.parametersampler#sklearn.model_selection.ParameterSampler "sklearn.model_selection.ParameterSampler")
A generator over parameter settings, constructed from param\_distributions.
#### Notes
The parameters selected are those that maximize the score of the held-out data, according to the scoring parameter.
If `n_jobs` was set to a value higher than one, the data is copied for each parameter setting (and not `n_jobs` times). This is done for efficiency reasons if individual jobs take very little time, but may raise errors if the dataset is large and not enough memory is available. A workaround in this case is to set `pre_dispatch`. Then, the memory is copied only `pre_dispatch` many times. A reasonable value for `pre_dispatch` is `2 * n_jobs`.
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import RandomizedSearchCV
>>> from scipy.stats import uniform
>>> iris = load_iris()
>>> logistic = LogisticRegression(solver='saga', tol=1e-2, max_iter=200,
... random_state=0)
>>> distributions = dict(C=uniform(loc=0, scale=4),
... penalty=['l2', 'l1'])
>>> clf = RandomizedSearchCV(logistic, distributions, random_state=0)
>>> search = clf.fit(iris.data, iris.target)
>>> search.best_params_
{'C': 2..., 'penalty': 'l1'}
```
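As described for `cv_results_` above, the results can be loaded into a pandas `DataFrame` for inspection; a minimal sketch continuing the example (assumes pandas is installed and `return_train_score` is left at its default of False):

```
>>> import pandas as pd
>>> results = pd.DataFrame(search.cv_results_)
>>> sorted(c for c in results.columns if c.startswith("mean_"))
['mean_fit_time', 'mean_score_time', 'mean_test_score']
```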
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.model_selection.RandomizedSearchCV.decision_function "sklearn.model_selection.RandomizedSearchCV.decision_function")(X) | Call decision\_function on the estimator with the best found parameters. |
| [`fit`](#sklearn.model_selection.RandomizedSearchCV.fit "sklearn.model_selection.RandomizedSearchCV.fit")(X[, y, groups]) | Run fit with all sets of parameters. |
| [`get_params`](#sklearn.model_selection.RandomizedSearchCV.get_params "sklearn.model_selection.RandomizedSearchCV.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.model_selection.RandomizedSearchCV.inverse_transform "sklearn.model_selection.RandomizedSearchCV.inverse_transform")(Xt) | Call inverse\_transform on the estimator with the best found params. |
| [`predict`](#sklearn.model_selection.RandomizedSearchCV.predict "sklearn.model_selection.RandomizedSearchCV.predict")(X) | Call predict on the estimator with the best found parameters. |
| [`predict_log_proba`](#sklearn.model_selection.RandomizedSearchCV.predict_log_proba "sklearn.model_selection.RandomizedSearchCV.predict_log_proba")(X) | Call predict\_log\_proba on the estimator with the best found parameters. |
| [`predict_proba`](#sklearn.model_selection.RandomizedSearchCV.predict_proba "sklearn.model_selection.RandomizedSearchCV.predict_proba")(X) | Call predict\_proba on the estimator with the best found parameters. |
| [`score`](#sklearn.model_selection.RandomizedSearchCV.score "sklearn.model_selection.RandomizedSearchCV.score")(X[, y]) | Return the score on the given data, if the estimator has been refit. |
| [`score_samples`](#sklearn.model_selection.RandomizedSearchCV.score_samples "sklearn.model_selection.RandomizedSearchCV.score_samples")(X) | Call score\_samples on the estimator with the best found parameters. |
| [`set_params`](#sklearn.model_selection.RandomizedSearchCV.set_params "sklearn.model_selection.RandomizedSearchCV.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.model_selection.RandomizedSearchCV.transform "sklearn.model_selection.RandomizedSearchCV.transform")(X) | Call transform on the estimator with the best found parameters. |
*property*classes\_
Class labels.
Only available when `refit=True` and the estimator is a classifier.
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L548)
Call decision\_function on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `decision_function`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_score**ndarray of shape (n\_samples,) or (n\_samples, n\_classes) or (n\_samples, n\_classes \* (n\_classes-1) / 2)
Result of the decision function for `X` based on the estimator with the best found parameters.
fit(*X*, *y=None*, *\**, *groups=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L738)
Run fit with all sets of parameters.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples, n\_output) or (n\_samples,), default=None
Target relative to X for classification or regression; None for unsupervised learning.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) instance (e.g., [`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")).
**\*\*fit\_params**dict of str -> object
Parameters passed to the `fit` method of the estimator.
If a fit parameter is an array-like whose length is equal to `num_samples` then it will be split across CV groups along with `X` and `y`. For example, the [sample\_weight](https://scikit-learn.org/1.1/glossary.html#term-sample_weight) parameter is split because `len(sample_weights) = len(X)`.
Returns:
**self**object
Instance of fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*Xt*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L593)
Call inverse\_transform on the estimator with the best found params.
Only available if the underlying estimator implements `inverse_transform` and `refit=True`.
Parameters:
**Xt**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Result of the `inverse_transform` function for `Xt` based on the estimator with the best found parameters.
*property*n\_features\_in\_
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
Only available when `refit=True`.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L480)
Call predict on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
The predicted labels or values for `X` based on the estimator with the best found parameters.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L525)
Call predict\_log\_proba on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict_log_proba`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted class log-probabilities for `X` based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L502)
Call predict\_proba on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict_proba`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted class probabilities for `X` based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
score(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L413)
Return the score on the given data, if the estimator has been refit.
This uses the score defined by `scoring` where provided, and the `best_estimator_.score` method otherwise.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples, n\_output) or (n\_samples,), default=None
Target relative to X for classification or regression; None for unsupervised learning.
Returns:
**score**float
The score defined by `scoring` if provided, and the `best_estimator_.score` method otherwise.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L457)
Call score\_samples on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `score_samples`.
New in version 0.24.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of the underlying estimator.
Returns:
**y\_score**ndarray of shape (n\_samples,)
The scores computed by `best_estimator_.score_samples` for `X`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L571)
Call transform on the estimator with the best found parameters.
Only available if the underlying estimator supports `transform` and `refit=True`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**Xt**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
`X` transformed in the new space based on the estimator with the best found parameters.
Examples using `sklearn.model_selection.RandomizedSearchCV`
-----------------------------------------------------------
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Faces recognition example using eigenfaces and SVMs](../../auto_examples/applications/plot_face_recognition#sphx-glr-auto-examples-applications-plot-face-recognition-py)
[Comparison of kernel ridge and Gaussian process regression](../../auto_examples/gaussian_process/plot_compare_gpr_krr#sphx-glr-auto-examples-gaussian-process-plot-compare-gpr-krr-py)
[Comparing randomized search and grid search for hyperparameter estimation](../../auto_examples/model_selection/plot_randomized_search#sphx-glr-auto-examples-model-selection-plot-randomized-search-py)
| programming_docs |
scikit_learn sklearn.mixture.BayesianGaussianMixture sklearn.mixture.BayesianGaussianMixture
=======================================
*class*sklearn.mixture.BayesianGaussianMixture(*\**, *n\_components=1*, *covariance\_type='full'*, *tol=0.001*, *reg\_covar=1e-06*, *max\_iter=100*, *n\_init=1*, *init\_params='kmeans'*, *weight\_concentration\_prior\_type='dirichlet\_process'*, *weight\_concentration\_prior=None*, *mean\_precision\_prior=None*, *mean\_prior=None*, *degrees\_of\_freedom\_prior=None*, *covariance\_prior=None*, *random\_state=None*, *warm\_start=False*, *verbose=0*, *verbose\_interval=10*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_bayesian_mixture.py#L69)
Variational Bayesian estimation of a Gaussian mixture.
This class allows inference of an approximate posterior distribution over the parameters of a Gaussian mixture distribution. The effective number of components can be inferred from the data.
This class implements two types of prior for the weights distribution: a finite mixture model with Dirichlet distribution and an infinite mixture model with the Dirichlet Process. In practice the Dirichlet Process inference algorithm is approximated and uses a truncated distribution with a fixed maximum number of components (called the stick-breaking representation). The number of components actually used almost always depends on the data.
New in version 0.18.
Read more in the [User Guide](../mixture#bgmm).
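A minimal sketch of the behaviour described above (assuming a simple dataset with two well-separated blobs): even with `n_components` set above the true number of clusters, the Dirichlet-process prior typically drives the weights of superfluous components towards zero.

```
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
# two well-separated Gaussian blobs, but up to five components allowed
X = np.concatenate([rng.normal(-5, 1, size=(100, 2)),
                    rng.normal(5, 1, size=(100, 2))])

bgm = BayesianGaussianMixture(n_components=5, random_state=0).fit(X)
print(bgm.weights_.round(3))  # most of the weight typically concentrates on ~2 components
```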
Parameters:
**n\_components**int, default=1
The number of mixture components. Depending on the data and the value of the `weight_concentration_prior` the model can decide to not use all the components by setting some component `weights_` to values very close to zero. The number of effective components is therefore smaller than n\_components.
**covariance\_type**{‘full’, ‘tied’, ‘diag’, ‘spherical’}, default=’full’
String describing the type of covariance parameters to use. Must be one of:
```
'full' (each component has its own general covariance matrix),
'tied' (all components share the same general covariance matrix),
'diag' (each component has its own diagonal covariance matrix),
'spherical' (each component has its own single variance).
```
**tol**float, default=1e-3
The convergence threshold. EM iterations will stop when the lower bound average gain on the likelihood (of the training data with respect to the model) is below this threshold.
**reg\_covar**float, default=1e-6
Non-negative regularization added to the diagonal of the covariance matrices. It ensures that the covariance matrices are all positive.
**max\_iter**int, default=100
The number of EM iterations to perform.
**n\_init**int, default=1
The number of initializations to perform. The result with the highest lower bound value on the likelihood is kept.
**init\_params**{‘kmeans’, ‘k-means++’, ‘random’, ‘random\_from\_data’}, default=’kmeans’
The method used to initialize the weights, the means and the covariances. String must be one of:
* ‘kmeans’ : responsibilities are initialized using kmeans.
* ‘k-means++’ : use the k-means++ method to initialize.
* ‘random’ : responsibilities are initialized randomly.
* ‘random\_from\_data’ : initial means are randomly selected data points.
Changed in version v1.1: `init_params` now accepts ‘random\_from\_data’ and ‘k-means++’ as initialization methods.
**weight\_concentration\_prior\_type**str, default=’dirichlet\_process’
String describing the type of the weight concentration prior. Must be one of:
```
'dirichlet_process' (using the Stick-breaking representation),
'dirichlet_distribution' (can favor more uniform weights).
```
**weight\_concentration\_prior**float or None, default=None
The dirichlet concentration of each component on the weight distribution (Dirichlet). This is commonly called gamma in the literature. The higher concentration puts more mass in the center and will lead to more components being active, while a lower concentration parameter will lead to more mass at the edge of the mixture weights simplex. The value of the parameter must be greater than 0. If it is None, it’s set to `1. / n_components`.
**mean\_precision\_prior**float or None, default=None
The precision prior on the mean distribution (Gaussian). Controls the extent of where means can be placed. Larger values concentrate the cluster means around `mean_prior`. The value of the parameter must be greater than 0. If it is None, it is set to 1.
**mean\_prior**array-like, shape (n\_features,), default=None
The prior on the mean distribution (Gaussian). If it is None, it is set to the mean of X.
**degrees\_of\_freedom\_prior**float or None, default=None
The prior of the number of degrees of freedom on the covariance distributions (Wishart). If it is None, it’s set to `n_features`.
**covariance\_prior**float or array-like, default=None
The prior on the covariance distribution (Wishart). If it is None, the empirical covariance prior is initialized using the covariance of X. The shape depends on `covariance_type`:
```
(n_features, n_features) if 'full',
(n_features, n_features) if 'tied',
(n_features) if 'diag',
float if 'spherical'
```
**random\_state**int, RandomState instance or None, default=None
Controls the random seed given to the method chosen to initialize the parameters (see `init_params`). In addition, it controls the generation of random samples from the fitted distribution (see the method `sample`). Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**warm\_start**bool, default=False
If ‘warm\_start’ is True, the solution of the last fitting is used as initialization for the next call of fit(). This can speed up convergence when fit is called several times on similar problems. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**verbose**int, default=0
Enable verbose output. If 1 then it prints the current initialization and each iteration step. If greater than 1 then it prints also the log probability and the time needed for each step.
**verbose\_interval**int, default=10
Number of iterations done before the next print.
Attributes:
**weights\_**array-like of shape (n\_components,)
The weights of each mixture component.
**means\_**array-like of shape (n\_components, n\_features)
The mean of each mixture component.
**covariances\_**array-like
The covariance of each mixture component. The shape depends on `covariance_type`:
```
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
```
**precisions\_**array-like
The precision matrices for each component in the mixture. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite so the mixture of Gaussian can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on `covariance_type`:
```
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
```
**precisions\_cholesky\_**array-like
The cholesky decomposition of the precision matrices of each mixture component. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite so the mixture of Gaussian can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on `covariance_type`:
```
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
```
**converged\_**bool
True when convergence was reached in fit(), False otherwise.
**n\_iter\_**int
Number of steps used by the best fit of inference to reach convergence.
**lower\_bound\_**float
Lower bound value on the model evidence (of the training data) of the best fit of inference.
**weight\_concentration\_prior\_**tuple or float
The dirichlet concentration of each component on the weight distribution (Dirichlet). The type depends on `weight_concentration_prior_type`:
```
(float, float) if 'dirichlet_process' (Beta parameters),
float if 'dirichlet_distribution' (Dirichlet parameters).
```
The higher concentration puts more mass in the center and will lead to more components being active, while a lower concentration parameter will lead to more mass at the edge of the simplex.
**weight\_concentration\_**array-like of shape (n\_components,)
The dirichlet concentration of each component on the weight distribution (Dirichlet).
**mean\_precision\_prior\_**float
The precision prior on the mean distribution (Gaussian). Controls the extent of where means can be placed. Larger values concentrate the cluster means around `mean_prior`. If mean\_precision\_prior is set to None, `mean_precision_prior_` is set to 1.
**mean\_precision\_**array-like of shape (n\_components,)
The precision of each component on the mean distribution (Gaussian).
**mean\_prior\_**array-like of shape (n\_features,)
The prior on the mean distribution (Gaussian).
**degrees\_of\_freedom\_prior\_**float
The prior of the number of degrees of freedom on the covariance distributions (Wishart).
**degrees\_of\_freedom\_**array-like of shape (n\_components,)
The number of degrees of freedom of each component in the model.
**covariance\_prior\_**float or array-like
The prior on the covariance distribution (Wishart). The shape depends on `covariance_type`:
```
(n_features, n_features) if 'full',
(n_features, n_features) if 'tied',
(n_features) if 'diag',
float if 'spherical'
```
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`GaussianMixture`](sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture "sklearn.mixture.GaussianMixture")
Finite Gaussian mixture fit with EM.
#### References
[1] [Bishop, Christopher M. (2006). “Pattern recognition and machine learning”. Vol. 4 No. 4. New York: Springer.](https://www.springer.com/kr/book/9780387310732)
[2] [Hagai Attias. (2000). “A Variational Bayesian Framework for Graphical Models”. In Advances in Neural Information Processing Systems 12.](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.2841&rep=rep1&type=pdf)
[3] [Blei, David M. and Michael I. Jordan. (2006). “Variational inference for Dirichlet process mixtures”. Bayesian analysis 1.1](https://www.cs.princeton.edu/courses/archive/fall11/cos597C/reading/BleiJordan2005.pdf)
#### Examples
```
>>> import numpy as np
>>> from sklearn.mixture import BayesianGaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [12, 4], [10, 7]])
>>> bgm = BayesianGaussianMixture(n_components=2, random_state=42).fit(X)
>>> bgm.means_
array([[2.49... , 2.29...],
[8.45..., 4.52... ]])
>>> bgm.predict([[0, 0], [9, 3]])
array([0, 1])
```
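The over-specification behaviour described for `n_components` can be inspected directly through the fitted `weights_`; a minimal sketch, reusing the toy data above (the `1e-2` threshold is an arbitrary illustration):
```
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [12, 4], [10, 7]])
# Deliberately over-specify the number of components; the variational
# inference drives the weights of unneeded components towards zero.
bgm = BayesianGaussianMixture(n_components=5, random_state=42).fit(X)
print(bgm.weights_)                 # most mass concentrates on a few components
print(np.sum(bgm.weights_ > 1e-2))  # number of effectively active components
```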
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.mixture.BayesianGaussianMixture.fit "sklearn.mixture.BayesianGaussianMixture.fit")(X[, y]) | Estimate model parameters with the EM algorithm. |
| [`fit_predict`](#sklearn.mixture.BayesianGaussianMixture.fit_predict "sklearn.mixture.BayesianGaussianMixture.fit_predict")(X[, y]) | Estimate model parameters using X and predict the labels for X. |
| [`get_params`](#sklearn.mixture.BayesianGaussianMixture.get_params "sklearn.mixture.BayesianGaussianMixture.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.mixture.BayesianGaussianMixture.predict "sklearn.mixture.BayesianGaussianMixture.predict")(X) | Predict the labels for the data samples in X using the trained model. |
| [`predict_proba`](#sklearn.mixture.BayesianGaussianMixture.predict_proba "sklearn.mixture.BayesianGaussianMixture.predict_proba")(X) | Evaluate the components' density for each sample. |
| [`sample`](#sklearn.mixture.BayesianGaussianMixture.sample "sklearn.mixture.BayesianGaussianMixture.sample")([n\_samples]) | Generate random samples from the fitted Gaussian distribution. |
| [`score`](#sklearn.mixture.BayesianGaussianMixture.score "sklearn.mixture.BayesianGaussianMixture.score")(X[, y]) | Compute the per-sample average log-likelihood of the given data X. |
| [`score_samples`](#sklearn.mixture.BayesianGaussianMixture.score_samples "sklearn.mixture.BayesianGaussianMixture.score_samples")(X) | Compute the log-likelihood of each sample. |
| [`set_params`](#sklearn.mixture.BayesianGaussianMixture.set_params "sklearn.mixture.BayesianGaussianMixture.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L174)
Estimate model parameters with the EM algorithm.
The method fits the model `n_init` times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for `max_iter` times until the change of likelihood or lower bound is less than `tol`, otherwise, a `ConvergenceWarning` is raised. If `warm_start` is `True`, then `n_init` is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
List of n\_features-dimensional data points. Each row corresponds to a single data point.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
The fitted mixture.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L203)
Estimate model parameters using X and predict the labels for X.
The method fits the model n\_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for `max_iter` times until the change of likelihood or lower bound is less than `tol`, otherwise, a [`ConvergenceWarning`](sklearn.exceptions.convergencewarning#sklearn.exceptions.ConvergenceWarning "sklearn.exceptions.ConvergenceWarning") is raised. After fitting, it predicts the most probable label for the input data points.
New in version 0.20.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
List of n\_features-dimensional data points. Each row corresponds to a single data point.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**labels**array, shape (n\_samples,)
Component labels.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L384)
Predict the labels for the data samples in X using the trained model.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
List of n\_features-dimensional data points. Each row corresponds to a single data point.
Returns:
**labels**array, shape (n\_samples,)
Component labels.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L402)
Evaluate the components’ density for each sample.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
List of n\_features-dimensional data points. Each row corresponds to a single data point.
Returns:
**resp**array, shape (n\_samples, n\_components)
Density of each Gaussian component for each sample in X.
sample(*n\_samples=1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L421)
Generate random samples from the fitted Gaussian distribution.
Parameters:
**n\_samples**int, default=1
Number of samples to generate.
Returns:
**X**array, shape (n\_samples, n\_features)
Randomly generated sample.
**y**array, shape (n\_samples,)
Component labels.
score(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L365)
Compute the per-sample average log-likelihood of the given data X.
Parameters:
**X**array-like of shape (n\_samples, n\_dimensions)
List of n\_features-dimensional data points. Each row corresponds to a single data point.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**log\_likelihood**float
Log-likelihood of `X` under the Gaussian mixture model.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/mixture/_base.py#L346)
Compute the log-likelihood of each sample.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
List of n\_features-dimensional data points. Each row corresponds to a single data point.
Returns:
**log\_prob**array, shape (n\_samples,)
Log-likelihood of each sample in `X` under the current model.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.mixture.BayesianGaussianMixture`
--------------------------------------------------------
[Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture](../../auto_examples/mixture/plot_concentration_prior#sphx-glr-auto-examples-mixture-plot-concentration-prior-py)
[Gaussian Mixture Model Ellipsoids](../../auto_examples/mixture/plot_gmm#sphx-glr-auto-examples-mixture-plot-gmm-py)
[Gaussian Mixture Model Sine Curve](../../auto_examples/mixture/plot_gmm_sin#sphx-glr-auto-examples-mixture-plot-gmm-sin-py)
scikit_learn sklearn.linear_model.orthogonal_mp sklearn.linear\_model.orthogonal\_mp
====================================
sklearn.linear\_model.orthogonal\_mp(*X*, *y*, *\**, *n\_nonzero\_coefs=None*, *tol=None*, *precompute=False*, *copy\_X=True*, *return\_path=False*, *return\_n\_iter=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_omp.py#L283)
Orthogonal Matching Pursuit (OMP).
Solves n\_targets Orthogonal Matching Pursuit problems. An instance of the problem has the form:
When parametrized by the number of non-zero coefficients using `n_nonzero_coefs`: argmin ||y - X gamma||^2 subject to ||gamma||\_0 <= n\_nonzero\_coefs
When parametrized by error using the parameter `tol`: argmin ||gamma||\_0 subject to ||y - X gamma||^2 <= tol
Read more in the [User Guide](../linear_model#omp).
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Input data. Columns are assumed to have unit norm.
**y**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Input targets.
**n\_nonzero\_coefs**int, default=None
Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n\_features.
**tol**float, default=None
Maximum norm of the residual. If not None, overrides n\_nonzero\_coefs.
**precompute**‘auto’ or bool, default=False
Whether to perform precomputations. Improves performance when n\_targets or n\_samples is very large.
**copy\_X**bool, default=True
Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway.
**return\_path**bool, default=False
Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation.
**return\_n\_iter**bool, default=False
Whether or not to return the number of iterations.
Returns:
**coef**ndarray of shape (n\_features,) or (n\_features, n\_targets)
Coefficients of the OMP solution. If `return_path=True`, this contains the whole coefficient path. In this case its shape is (n\_features, n\_features) or (n\_features, n\_targets, n\_features) and iterating over the last axis generates coefficients in increasing order of active features.
**n\_iters**array-like or int
Number of active features across every target. Returned only if `return_n_iter` is set to True.
See also
[`OrthogonalMatchingPursuit`](sklearn.linear_model.orthogonalmatchingpursuit#sklearn.linear_model.OrthogonalMatchingPursuit "sklearn.linear_model.OrthogonalMatchingPursuit")
Orthogonal Matching Pursuit model.
[`orthogonal_mp_gram`](sklearn.linear_model.orthogonal_mp_gram#sklearn.linear_model.orthogonal_mp_gram "sklearn.linear_model.orthogonal_mp_gram")
Solve OMP problems using Gram matrix and the product X.T \* y.
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Sparse coding.
#### Notes
Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (<https://www.di.ens.fr/~mallat/papiers/MallatPursuit93.pdf>)
This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. <https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf>
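A minimal sketch of the `n_nonzero_coefs` formulation above, on synthetic data assumed purely for illustration:
```
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
X = rng.randn(50, 20)
X /= np.linalg.norm(X, axis=0)             # columns are assumed to have unit norm
true_coef = np.zeros(20)
true_coef[[2, 7, 11]] = [1.5, -2.0, 0.7]   # a 3-sparse target
y = X @ true_coef

coef = orthogonal_mp(X, y, n_nonzero_coefs=3)
print(np.flatnonzero(coef))                # indices of the recovered support
```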
scikit_learn sklearn.multioutput.RegressorChain sklearn.multioutput.RegressorChain
==================================
*class*sklearn.multioutput.RegressorChain(*base\_estimator*, *\**, *order=None*, *cv=None*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L842)
A multi-label model that arranges regressions into a chain.
Each model makes a prediction in the order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain.
Read more in the [User Guide](../multiclass#regressorchain).
New in version 0.20.
Parameters:
**base\_estimator**estimator
The base estimator from which the regressor chain is built.
**order**array-like of shape (n\_outputs,) or ‘random’, default=None
If `None`, the order will be determined by the order of columns in the label matrix Y:
```
order = [0, 1, 2, ..., Y.shape[1] - 1]
```
The order of the chain can be explicitly set by providing a list of integers. For example, for a chain of length 5:
```
order = [1, 3, 2, 4, 0]
```
means that the first model in the chain will make predictions for column 1 in the Y matrix, the second model will make predictions for column 3, etc.
If order is ‘random’ a random ordering will be used.
**cv**int, cross-validation generator or an iterable, default=None
Determines whether to use cross validated predictions or true labels for the results of previous estimators in the chain. Possible inputs for cv are:
* None, to use true labels when fitting,
* integer, to specify the number of folds in a (Stratified)KFold,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
**random\_state**int, RandomState instance or None, optional (default=None)
If `order='random'`, determines random number generation for the chain order. In addition, it controls the random seed given at each `base_estimator` at each chaining iteration. Thus, it is only used when `base_estimator` exposes a `random_state`. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**estimators\_**list
A list of clones of base\_estimator.
**order\_**list
The order of labels in the regressor chain.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying `base_estimator` exposes such an attribute when fit.
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`ClassifierChain`](sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain "sklearn.multioutput.ClassifierChain")
Equivalent for classification.
[`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")
Learns each output independently rather than chaining.
#### Examples
```
>>> from sklearn.multioutput import RegressorChain
>>> from sklearn.linear_model import LogisticRegression
>>> logreg = LogisticRegression(solver='lbfgs',multi_class='multinomial')
>>> X, Y = [[1, 0], [0, 1], [1, 1]], [[0, 2], [1, 1], [2, 0]]
>>> chain = RegressorChain(base_estimator=logreg, order=[0, 1]).fit(X, Y)
>>> chain.predict(X)
array([[0., 2.],
[1., 1.],
[2., 0.]])
```
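The chaining also applies to continuous targets with any regressor as `base_estimator`; a minimal sketch using `Ridge` on synthetic data, where the dataset and chain order are assumptions for illustration:
```
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

# Three regression targets; each Ridge model in the chain sees the original
# features plus the predictions of the models earlier in `order`.
X, Y = make_regression(n_samples=100, n_features=5, n_targets=3, random_state=0)
chain = RegressorChain(base_estimator=Ridge(), order=[2, 0, 1]).fit(X, Y)
print(chain.predict(X[:2]).shape)  # (2, 3): one prediction per target
```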
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.multioutput.RegressorChain.fit "sklearn.multioutput.RegressorChain.fit")(X, Y, \*\*fit\_params) | Fit the model to data matrix X and targets Y. |
| [`get_params`](#sklearn.multioutput.RegressorChain.get_params "sklearn.multioutput.RegressorChain.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.multioutput.RegressorChain.predict "sklearn.multioutput.RegressorChain.predict")(X) | Predict on the data matrix X using the RegressorChain model. |
| [`score`](#sklearn.multioutput.RegressorChain.score "sklearn.multioutput.RegressorChain.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.multioutput.RegressorChain.set_params "sklearn.multioutput.RegressorChain.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *Y*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L933)
Fit the model to data matrix X and targets Y.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input data.
**Y**array-like of shape (n\_samples, n\_classes)
The target values.
**\*\*fit\_params**dict of string -> object
Parameters passed to the `fit` method at each step of the regressor chain.
New in version 0.23.
Returns:
**self**object
Returns a fitted instance.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L607)
Predict on the data matrix X using the RegressorChain model.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input data.
Returns:
**Y\_pred**array-like of shape (n\_samples, n\_classes)
The predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.gaussian_process.kernels.DotProduct sklearn.gaussian\_process.kernels.DotProduct
============================================
*class*sklearn.gaussian\_process.kernels.DotProduct(*sigma\_0=1.0*, *sigma\_0\_bounds=(1e-05, 100000.0)*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2074)
Dot-Product kernel.
The DotProduct kernel is non-stationary and can be obtained from linear regression by putting \(N(0, 1)\) priors on the coefficients of \(x\_d (d = 1, \ldots, D)\) and a prior of \(N(0, \sigma\_0^2)\) on the bias. The DotProduct kernel is invariant to a rotation of the coordinates about the origin, but not to translations. It is parameterized by the parameter \(\sigma\_0\), which controls the inhomogeneity of the kernel. For \(\sigma\_0^2 = 0\), the kernel is called the homogeneous linear kernel; otherwise it is inhomogeneous. The kernel is given by
\[k(x\_i, x\_j) = \sigma\_0^2 + x\_i \cdot x\_j\] The DotProduct kernel is commonly combined with exponentiation.
See [[1]](#r95f74c4622c1-1), Chapter 4, Section 4.2, for further details regarding the DotProduct kernel.
Read more in the [User Guide](../gaussian_process#gp-kernels).
New in version 0.18.
Parameters:
**sigma\_0**float >= 0, default=1.0
Parameter controlling the inhomogeneity of the kernel. If sigma\_0=0, the kernel is homogeneous.
**sigma\_0\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘sigma\_0’. If set to “fixed”, ‘sigma\_0’ cannot be changed during hyperparameter tuning.
Attributes:
[`bounds`](#sklearn.gaussian_process.kernels.DotProduct.bounds "sklearn.gaussian_process.kernels.DotProduct.bounds")
Returns the log-transformed bounds on the theta.
**hyperparameter\_sigma\_0**
[`hyperparameters`](#sklearn.gaussian_process.kernels.DotProduct.hyperparameters "sklearn.gaussian_process.kernels.DotProduct.hyperparameters")
Returns a list of all hyperparameter specifications.
[`n_dims`](#sklearn.gaussian_process.kernels.DotProduct.n_dims "sklearn.gaussian_process.kernels.DotProduct.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.DotProduct.requires_vector_input "sklearn.gaussian_process.kernels.DotProduct.requires_vector_input")
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
[`theta`](#sklearn.gaussian_process.kernels.DotProduct.theta "sklearn.gaussian_process.kernels.DotProduct.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### References
[[1](#id1)] [Carl Edward Rasmussen, Christopher K. I. Williams (2006). “Gaussian Processes for Machine Learning”. The MIT Press.](http://www.gaussianprocess.org/gpml/)
#### Examples
```
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = DotProduct() + WhiteKernel()
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3680...
>>> gpr.predict(X[:2,:], return_std=True)
(array([653.0..., 592.1...]), array([316.6..., 316.6...]))
```
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.DotProduct.__call__ "sklearn.gaussian_process.kernels.DotProduct.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.DotProduct.clone_with_theta "sklearn.gaussian_process.kernels.DotProduct.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.DotProduct.diag "sklearn.gaussian_process.kernels.DotProduct.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.DotProduct.get_params "sklearn.gaussian_process.kernels.DotProduct.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.DotProduct.is_stationary "sklearn.gaussian_process.kernels.DotProduct.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.DotProduct.set_params "sklearn.gaussian_process.kernels.DotProduct.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2139)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when `eval_gradient` is True.
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2185)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y).
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X).
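A minimal sketch of the equivalence with `np.diag(self(X))` stated above, on an arbitrary toy array:
```
import numpy as np
from sklearn.gaussian_process.kernels import DotProduct

X = np.array([[1.0, 2.0], [3.0, 0.5], [0.0, 1.0]])
kernel = DotProduct(sigma_0=1.0)

# diag(X) avoids building the full Gram matrix but matches its diagonal.
assert np.allclose(kernel.diag(X), np.diag(kernel(X)))
```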
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L158)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameters
Returns a list of all hyperparameter specifications.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2204)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using `sklearn.gaussian_process.kernels.DotProduct`
------------------------------------------------------------
[Illustration of Gaussian process classification (GPC) on the XOR dataset](../../auto_examples/gaussian_process/plot_gpc_xor#sphx-glr-auto-examples-gaussian-process-plot-gpc-xor-py)
[Illustration of prior and posterior Gaussian process for different kernels](../../auto_examples/gaussian_process/plot_gpr_prior_posterior#sphx-glr-auto-examples-gaussian-process-plot-gpr-prior-posterior-py)
[Iso-probability lines for Gaussian Processes classification (GPC)](../../auto_examples/gaussian_process/plot_gpc_isoprobability#sphx-glr-auto-examples-gaussian-process-plot-gpc-isoprobability-py)
scikit_learn sklearn.utils.random.sample_without_replacement sklearn.utils.random.sample\_without\_replacement
=================================================
sklearn.utils.random.sample\_without\_replacement()
Sample integers without replacement.
Select n\_samples integers from the set [0, n\_population) without replacement.
Parameters:
**n\_population**int
The size of the set to sample from.
**n\_samples**int
The number of integers to sample.
**random\_state**int, RandomState instance or None, default=None
If int, random\_state is the seed used by the random number generator; If RandomState instance, random\_state is the random number generator; If None, the random number generator is the RandomState instance used by `np.random`.
**method**{“auto”, “tracking\_selection”, “reservoir\_sampling”, “pool”}, default=’auto’
If method == “auto”, the ratio of n\_samples / n\_population is used to determine which algorithm to use: If ratio is between 0 and 0.01, tracking selection is used. If ratio is between 0.01 and 0.99, numpy.random.permutation is used. If ratio is greater than 0.99, reservoir sampling is used. The order of the selected integers is undefined. If a random order is desired, the selected subset should be shuffled.
If method ==”tracking\_selection”, a set based implementation is used which is suitable for `n_samples` <<< `n_population`.
If method == “reservoir\_sampling”, a reservoir sampling algorithm is used which is suitable for high memory constraint or when O(`n_samples`) ~ O(`n_population`). The order of the selected integers is undefined. If a random order is desired, the selected subset should be shuffled.
If method == “pool”, a pool based algorithm is particularly fast, even faster than the tracking selection method. However, a vector containing the entire population has to be initialized. If n\_samples ~ n\_population, the reservoir sampling method is faster.
Returns:
**out**ndarray of shape (n\_samples,)
The sampled subset of integers. The subset of selected integers might not be randomized; see the method argument.
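A minimal usage sketch, assuming a population far too large to shuffle in full:
```
from sklearn.utils.random import sample_without_replacement

# Draw 10 distinct integers from [0, 1_000_000); with method="auto" the
# algorithm is chosen from the n_samples / n_population ratio.
idx = sample_without_replacement(1_000_000, 10, random_state=0)
print(idx)                  # ndarray of 10 unique integers
print(len(set(idx)) == 10)  # True: no repeats
```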
scikit_learn sklearn.model_selection.RepeatedKFold sklearn.model\_selection.RepeatedKFold
======================================
*class*sklearn.model\_selection.RepeatedKFold(*\**, *n\_splits=5*, *n\_repeats=10*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L1466)
Repeated K-Fold cross validator.
Repeats K-Fold n times with different randomization in each repetition.
Read more in the [User Guide](../cross_validation#repeated-k-fold).
Parameters:
**n\_splits**int, default=5
Number of folds. Must be at least 2.
**n\_repeats**int, default=10
Number of times cross-validator needs to be repeated.
**random\_state**int, RandomState instance or None, default=None
Controls the randomness of each repeated cross-validation instance. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
See also
[`RepeatedStratifiedKFold`](sklearn.model_selection.repeatedstratifiedkfold#sklearn.model_selection.RepeatedStratifiedKFold "sklearn.model_selection.RepeatedStratifiedKFold")
Repeats Stratified K-Fold n times.
#### Notes
Randomized CV splitters may return different results for each call of split. You can make the results identical by setting `random_state` to an integer.
#### Examples
```
>>> import numpy as np
>>> from sklearn.model_selection import RepeatedKFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 1, 1])
>>> rkf = RepeatedKFold(n_splits=2, n_repeats=2, random_state=2652124)
>>> for train_index, test_index in rkf.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
...
TRAIN: [0 1] TEST: [2 3]
TRAIN: [2 3] TEST: [0 1]
TRAIN: [1 2] TEST: [0 3]
TRAIN: [0 3] TEST: [1 2]
```
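The total number of (train, test) splits produced is `n_splits * n_repeats`, which `get_n_splits` reports; a minimal sketch continuing the example above:
```
from sklearn.model_selection import RepeatedKFold

rkf = RepeatedKFold(n_splits=2, n_repeats=2, random_state=2652124)
print(rkf.get_n_splits())  # 4 splits = 2 folds x 2 repeats
```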
#### Methods
| | |
| --- | --- |
| [`get_n_splits`](#sklearn.model_selection.RepeatedKFold.get_n_splits "sklearn.model_selection.RepeatedKFold.get_n_splits")([X, y, groups]) | Returns the number of splitting iterations in the cross-validator |
| [`split`](#sklearn.model_selection.RepeatedKFold.split "sklearn.model_selection.RepeatedKFold.split")(X[, y, groups]) | Generates indices to split data into training and test set. |
get\_n\_splits(*X=None*, *y=None*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L1436)
Returns the number of splitting iterations in the cross-validator
Parameters:
**X**object
Always ignored, exists for compatibility. `np.zeros(n_samples)` may be used as a placeholder.
**y**object
Always ignored, exists for compatibility. `np.zeros(n_samples)` may be used as a placeholder.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set.
Returns:
**n\_splits**int
Returns the number of splitting iterations in the cross-validator.
split(*X*, *y=None*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L1404)
Generates indices to split data into training and test set.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
The target variable for supervised learning problems.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set.
Yields:
**train**ndarray
The training set indices for that split.
**test**ndarray
The testing set indices for that split.
Examples using `sklearn.model_selection.RepeatedKFold`
------------------------------------------------------
[Common pitfalls in the interpretation of coefficients of linear models](../../auto_examples/inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
scikit_learn sklearn.linear_model.ridge_regression sklearn.linear\_model.ridge\_regression
=======================================
sklearn.linear\_model.ridge\_regression(*X*, *y*, *alpha*, *\**, *sample\_weight=None*, *solver='auto'*, *max\_iter=None*, *tol=0.001*, *verbose=0*, *positive=False*, *random\_state=None*, *return\_n\_iter=False*, *return\_intercept=False*, *check\_input=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L371)
Solve the ridge equation by the method of normal equations.
Read more in the [User Guide](../linear_model#ridge-regression).
Parameters:
**X**{ndarray, sparse matrix, LinearOperator} of shape (n\_samples, n\_features)
Training data.
**y**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**alpha**float or array-like of shape (n\_targets,)
Constant that multiplies the L2 term, controlling regularization strength. `alpha` must be a non-negative float i.e. in `[0, inf)`.
When `alpha = 0`, the objective is equivalent to ordinary least squares, solved by the [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") object. For numerical reasons, using `alpha = 0` with the `Ridge` object is not advised. Instead, you should use the [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") object.
If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
**sample\_weight**float or array-like of shape (n\_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight. If sample\_weight is not None and solver=’auto’, the solver will be set to ‘cholesky’.
New in version 0.17.
**solver**{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse\_cg’, ‘sag’, ‘saga’, ‘lbfgs’}, default=’auto’
Solver to use in the computational routines:
* ‘auto’ chooses the solver automatically based on the type of data.
* ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. It is the most stable solver, in particular more stable for singular matrices than ‘cholesky’ at the cost of being slower.
* ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution via a Cholesky decomposition of dot(X.T, X)
* ‘sparse\_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set `tol` and `max_iter`).
* ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
* ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n\_samples and n\_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
* ‘lbfgs’ uses L-BFGS-B algorithm implemented in `scipy.optimize.minimize`. It can be used only when `positive` is True.
All solvers except ‘svd’ support both dense and sparse data. However, only ‘lsqr’, ‘sag’, ‘sparse\_cg’, and ‘lbfgs’ support sparse input when `fit_intercept` is True.
New in version 0.17: Stochastic Average Gradient descent solver.
New in version 0.19: SAGA solver.
**max\_iter**int, default=None
Maximum number of iterations for the conjugate gradient solver. For the ‘sparse\_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For the ‘sag’ and ‘saga’ solvers, the default value is 1000. For the ‘lbfgs’ solver, the default value is 15000.
**tol**float, default=1e-3
Precision of the solution.
**verbose**int, default=0
Verbosity level. Setting verbose > 0 will display additional information depending on the solver used.
**positive**bool, default=False
When set to `True`, forces the coefficients to be positive. Only ‘lbfgs’ solver is supported in this case.
**random\_state**int, RandomState instance, default=None
Used when `solver` == ‘sag’ or ‘saga’ to shuffle the data. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state) for details.
**return\_n\_iter**bool, default=False
If True, the method also returns `n_iter`, the actual number of iterations performed by the solver.
New in version 0.17.
**return\_intercept**bool, default=False
If True and if X is sparse, the method also returns the intercept, and the solver is automatically changed to ‘sag’. This is only a temporary fix for fitting the intercept with sparse data. For dense data, use sklearn.linear\_model.\_preprocess\_data before your regression.
New in version 0.17.
**check\_input**bool, default=True
If False, the input arrays X and y will not be checked.
New in version 0.21.
Returns:
**coef**ndarray of shape (n\_features,) or (n\_targets, n\_features)
Weight vector(s).
**n\_iter**int, optional
The actual number of iterations performed by the solver. Only returned if `return_n_iter` is True.
**intercept**float or ndarray of shape (n\_targets,)
The intercept of the model. Only returned if `return_intercept` is True and if X is a scipy sparse array.
#### Notes
This function won’t compute the intercept.
Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to `1 / (2C)` in other linear models such as [`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") or [`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"). If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
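A minimal sketch on synthetic, zero-mean data (the function does not fit an intercept, so the data here is centered by construction and assumed purely for illustration):
```
import numpy as np
from sklearn.linear_model import ridge_regression

rng = np.random.RandomState(0)
X = rng.randn(100, 4)                      # standard-normal features, mean ~ 0
y = X @ np.array([1.0, -2.0, 0.0, 3.0]) + 0.1 * rng.randn(100)

coef = ridge_regression(X, y, alpha=1.0, solver="auto")
print(coef)  # one weight per feature, shrunk towards zero by the L2 penalty
```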
scikit_learn sklearn.pipeline.make_pipeline sklearn.pipeline.make\_pipeline
===============================
sklearn.pipeline.make\_pipeline(*\*steps*, *memory=None*, *verbose=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L804)
Construct a [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") from the given estimators.
This is a shorthand for the [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set to the lowercase of their types automatically.
Parameters:
**\*steps**list of Estimator objects
List of the scikit-learn estimators that are chained together.
**memory**str or object with the joblib.Memory interface, default=None
Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute `named_steps` or `steps` to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.
**verbose**bool, default=False
If True, the time elapsed while fitting each step will be printed as it is completed.
Returns:
**p**Pipeline
Returns a scikit-learn [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") object.
See also
[`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")
Class for creating a pipeline of transforms with a final estimator.
#### Examples
```
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.pipeline import make_pipeline
>>> make_pipeline(StandardScaler(), GaussianNB(priors=None))
Pipeline(steps=[('standardscaler', StandardScaler()),
('gaussiannb', GaussianNB())])
```
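The `memory` argument described above can be combined with `make_pipeline` as follows; a minimal sketch in which the cache directory and the chosen estimators are assumptions for illustration:
```
import tempfile
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Cache fitted transformers on disk; refitting with identical data and
# parameters reuses the cached PCA instead of recomputing it.
with tempfile.TemporaryDirectory() as cache_dir:
    pipe = make_pipeline(PCA(n_components=2), SVC(), memory=cache_dir)
    pipe.fit(X, y)
    print(pipe.named_steps["pca"])  # steps are named after their lowercased class
```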
Examples using `sklearn.pipeline.make_pipeline`
-----------------------------------------------
[Release Highlights for scikit-learn 1.1](../../auto_examples/release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Release Highlights for scikit-learn 1.0](../../auto_examples/release_highlights/plot_release_highlights_1_0_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-0-0-py)
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Release Highlights for scikit-learn 0.22](../../auto_examples/release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Principal Component Regression vs Partial Least Squares Regression](../../auto_examples/cross_decomposition/plot_pcr_vs_pls#sphx-glr-auto-examples-cross-decomposition-plot-pcr-vs-pls-py)
[Categorical Feature Support in Gradient Boosting](../../auto_examples/ensemble/plot_gradient_boosting_categorical#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-categorical-py)
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Feature transformations with ensembles of trees](../../auto_examples/ensemble/plot_feature_transformation#sphx-glr-auto-examples-ensemble-plot-feature-transformation-py)
[Time-related feature engineering](../../auto_examples/applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py)
[Pipeline ANOVA SVM](../../auto_examples/feature_selection/plot_feature_selection_pipeline#sphx-glr-auto-examples-feature-selection-plot-feature-selection-pipeline-py)
[Univariate Feature Selection](../../auto_examples/feature_selection/plot_feature_selection#sphx-glr-auto-examples-feature-selection-plot-feature-selection-py)
[Comparing Linear Bayesian Regressors](../../auto_examples/linear_model/plot_ard#sphx-glr-auto-examples-linear-model-plot-ard-py)
[Lasso model selection via information criteria](../../auto_examples/linear_model/plot_lasso_lars_ic#sphx-glr-auto-examples-linear-model-plot-lasso-lars-ic-py)
[Lasso model selection: AIC-BIC / cross-validation](../../auto_examples/linear_model/plot_lasso_model_selection#sphx-glr-auto-examples-linear-model-plot-lasso-model-selection-py)
[One-Class SVM versus One-Class SVM using Stochastic Gradient Descent](../../auto_examples/linear_model/plot_sgdocsvm_vs_ocsvm#sphx-glr-auto-examples-linear-model-plot-sgdocsvm-vs-ocsvm-py)
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Polynomial and Spline interpolation](../../auto_examples/linear_model/plot_polynomial_interpolation#sphx-glr-auto-examples-linear-model-plot-polynomial-interpolation-py)
[Robust linear estimator fitting](../../auto_examples/linear_model/plot_robust_fit#sphx-glr-auto-examples-linear-model-plot-robust-fit-py)
[Tweedie regression on insurance claims](../../auto_examples/linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
[Common pitfalls in the interpretation of coefficients of linear models](../../auto_examples/inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
[Partial Dependence and Individual Conditional Expectation Plots](../../auto_examples/inspection/plot_partial_dependence#sphx-glr-auto-examples-inspection-plot-partial-dependence-py)
[Scalable learning with polynomial kernel approximation](../../auto_examples/kernel_approximation/plot_scalable_poly_kernels#sphx-glr-auto-examples-kernel-approximation-plot-scalable-poly-kernels-py)
[Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](../../auto_examples/manifold/plot_lle_digits#sphx-glr-auto-examples-manifold-plot-lle-digits-py)
[Advanced Plotting With Partial Dependence](../../auto_examples/miscellaneous/plot_partial_dependence_visualization_api#sphx-glr-auto-examples-miscellaneous-plot-partial-dependence-visualization-api-py)
[Comparing anomaly detection algorithms for outlier detection on toy datasets](../../auto_examples/miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py)
[Displaying Pipelines](../../auto_examples/miscellaneous/plot_pipeline_display#sphx-glr-auto-examples-miscellaneous-plot-pipeline-display-py)
[Visualizations with Display Objects](../../auto_examples/miscellaneous/plot_display_object_visualization#sphx-glr-auto-examples-miscellaneous-plot-display-object-visualization-py)
[Imputing missing values before building an estimator](../../auto_examples/impute/plot_missing_values#sphx-glr-auto-examples-impute-plot-missing-values-py)
[Imputing missing values with variants of IterativeImputer](../../auto_examples/impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)
[Detection error tradeoff (DET) curve](../../auto_examples/model_selection/plot_det#sphx-glr-auto-examples-model-selection-plot-det-py)
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
[Approximate nearest neighbors in TSNE](../../auto_examples/neighbors/approximate_nearest_neighbors#sphx-glr-auto-examples-neighbors-approximate-nearest-neighbors-py)
[Dimensionality Reduction with Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_dim_reduction#sphx-glr-auto-examples-neighbors-plot-nca-dim-reduction-py)
[Varying regularization in Multi-layer Perceptron](../../auto_examples/neural_networks/plot_mlp_alpha#sphx-glr-auto-examples-neural-networks-plot-mlp-alpha-py)
[Feature discretization](../../auto_examples/preprocessing/plot_discretization_classification#sphx-glr-auto-examples-preprocessing-plot-discretization-classification-py)
[Importance of Feature Scaling](../../auto_examples/preprocessing/plot_scaling_importance#sphx-glr-auto-examples-preprocessing-plot-scaling-importance-py)
[Clustering text documents using k-means](../../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
scikit_learn sklearn.metrics.cluster.pair_confusion_matrix sklearn.metrics.cluster.pair\_confusion\_matrix
===============================================
sklearn.metrics.cluster.pair\_confusion\_matrix(*labels\_true*, *labels\_pred*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_supervised.py#L161)
Pair confusion matrix arising from two clusterings [[1]](#r9ca8fd06d29a-1).
The pair confusion matrix \(C\) computes a 2 by 2 similarity matrix between two clusterings by considering all pairs of samples and counting pairs that are assigned into the same or into different clusters under the true and predicted clusterings.
Considering a pair of samples that is clustered together as a positive pair, then, as in binary classification, the count of true negatives is \(C\_{00}\), false negatives is \(C\_{10}\), true positives is \(C\_{11}\), and false positives is \(C\_{01}\).
Read more in the [User Guide](../clustering#pair-confusion-matrix).
Parameters:
**labels\_true**array-like of shape (n\_samples,), dtype=integral
Ground truth class labels to be used as a reference.
**labels\_pred**array-like of shape (n\_samples,), dtype=integral
Cluster labels to evaluate.
Returns:
**C**ndarray of shape (2, 2), dtype=np.int64
The contingency matrix.
See also
`rand_score`
Rand Score.
`adjusted_rand_score`
Adjusted Rand Score.
`adjusted_mutual_info_score`
Adjusted Mutual Information.
#### References
[[1](#id1)] [Hubert, L., Arabie, P. “Comparing partitions.” Journal of Classification 2, 193–218 (1985).](https://doi.org/10.1007/BF01908075)
#### Examples
Perfectly matching labelings have all non-zero entries on the diagonal regardless of actual label values:
```
>>> from sklearn.metrics.cluster import pair_confusion_matrix
>>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0])
array([[8, 0],
[0, 4]]...
```
Labelings that assign all members of a class to the same cluster are complete but may not always be pure, and are hence penalized, having some off-diagonal non-zero entries:
```
>>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1])
array([[8, 2],
[0, 2]]...
```
Note that the matrix is not symmetric.
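The pair confusion matrix is closely related to the Rand index: since its entries count agreeing and disagreeing sample pairs, the (unadjusted) Rand score can be recovered as the fraction of diagonal mass. The following is a small illustrative sketch, not part of the original docstring, reusing the labelings from the second example above:
```
>>> import numpy as np
>>> from sklearn.metrics import rand_score
>>> from sklearn.metrics.cluster import pair_confusion_matrix
>>> labels_true, labels_pred = [0, 0, 1, 2], [0, 0, 1, 1]
>>> C = pair_confusion_matrix(labels_true, labels_pred)
>>> float((C[0, 0] + C[1, 1]) / C.sum())   # fraction of agreeing pairs
0.8333...
>>> rand_score(labels_true, labels_pred)   # same quantity via the dedicated metric
0.8333...
```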
scikit_learn sklearn.metrics.r2_score sklearn.metrics.r2\_score
=========================
sklearn.metrics.r2\_score(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*, *multioutput='uniform\_average'*, *force\_finite=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_regression.py#L784)
\(R^2\) (coefficient of determination) regression score function.
Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). In the general case when the true y is non-constant, a constant model that always predicts the average y disregarding the input features would get a \(R^2\) score of 0.0.
In the particular case when `y_true` is constant, the \(R^2\) score is not finite: it is either `NaN` (perfect predictions) or `-Inf` (imperfect predictions). To prevent such non-finite numbers from polluting higher-level experiments such as a grid search cross-validation, by default these cases are replaced with 1.0 (perfect predictions) or 0.0 (imperfect predictions) respectively. You can set `force_finite` to `False` to prevent this fix from happening.
Note: when the prediction residuals have zero mean, the \(R^2\) score is identical to the [`Explained Variance score`](sklearn.metrics.explained_variance_score#sklearn.metrics.explained_variance_score "sklearn.metrics.explained_variance_score").
Read more in the [User Guide](../model_evaluation#r2-score).
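To illustrate the note above, here is a small sketch (not taken from the original docstring) contrasting \(R^2\) with the explained variance score when the residuals have a non-zero mean: a constant bias in the predictions hurts \(R^2\) but is invisible to the explained variance.
```
>>> from sklearn.metrics import r2_score, explained_variance_score
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 3, 4]          # predictions biased by a constant offset of +1
>>> explained_variance_score(y_true, y_pred)
1.0
>>> r2_score(y_true, y_pred)
-0.5
```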
Parameters:
**y\_true**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Ground truth (correct) target values.
**y\_pred**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Estimated target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**multioutput**{‘raw\_values’, ‘uniform\_average’, ‘variance\_weighted’}, array-like of shape (n\_outputs,) or None, default=’uniform\_average’
Defines how to aggregate multiple output scores. An array-like value defines the weights used to average the scores. Default is “uniform\_average”.
‘raw\_values’ :
Returns a full set of scores in case of multioutput input.
‘uniform\_average’ :
Scores of all outputs are averaged with uniform weight.
‘variance\_weighted’ :
Scores of all outputs are averaged, weighted by the variances of each individual output.
Changed in version 0.19: Default value of multioutput is ‘uniform\_average’.
**force\_finite**bool, default=True
Flag indicating if `NaN` and `-Inf` scores resulting from constant data should be replaced with real numbers (`1.0` if prediction is perfect, `0.0` otherwise). Default is `True`, a convenient setting for hyperparameters’ search procedures (e.g. grid search cross-validation).
New in version 1.1.
Returns:
**z**float or ndarray of floats
The \(R^2\) score or ndarray of scores if ‘multioutput’ is ‘raw\_values’.
#### Notes
This is not a symmetric function.
Unlike most other scores, \(R^2\) score may be negative (it need not actually be the square of a quantity R).
This metric is not well-defined for single samples and will return a NaN value if n\_samples is less than two.
#### References
[1] [Wikipedia entry on the Coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination)
#### Examples
```
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred,
... multioutput='variance_weighted')
0.938...
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> r2_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> r2_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [3, 2, 1]
>>> r2_score(y_true, y_pred)
-3.0
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2]
>>> r2_score(y_true, y_pred)
1.0
>>> r2_score(y_true, y_pred, force_finite=False)
nan
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2 + 1e-8]
>>> r2_score(y_true, y_pred)
0.0
>>> r2_score(y_true, y_pred, force_finite=False)
-inf
```
Examples using `sklearn.metrics.r2_score`
-----------------------------------------
[Lasso and Elastic Net for Sparse Signals](../../auto_examples/linear_model/plot_lasso_and_elasticnet#sphx-glr-auto-examples-linear-model-plot-lasso-and-elasticnet-py)
[Linear Regression Example](../../auto_examples/linear_model/plot_ols#sphx-glr-auto-examples-linear-model-plot-ols-py)
[Non-negative least squares](../../auto_examples/linear_model/plot_nnls#sphx-glr-auto-examples-linear-model-plot-nnls-py)
[Effect of transforming the targets in regression model](../../auto_examples/compose/plot_transformed_target#sphx-glr-auto-examples-compose-plot-transformed-target-py)
scikit_learn sklearn.base.DensityMixin sklearn.base.DensityMixin
=========================
*class*sklearn.base.DensityMixin[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L928)
Mixin class for all density estimators in scikit-learn.
#### Methods
| | |
| --- | --- |
| [`score`](#sklearn.base.DensityMixin.score "sklearn.base.DensityMixin.score")(X[, y]) | Return the score of the model on the data `X`. |
score(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L933)
Return the score of the model on the data `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**score**float
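As a hedged illustration (not part of this docstring), concrete density estimators such as `sklearn.mixture.GaussianMixture` inherit this mixin and override `score` so that it returns, for example, the average log-likelihood of the data; `y` is ignored:
```
>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture   # a DensityMixin subclass
>>> rng = np.random.RandomState(0)
>>> X = rng.normal(size=(200, 2))
>>> model = GaussianMixture(n_components=1, random_state=0).fit(X)
>>> log_likelihood = model.score(X)   # higher is better; y is not needed
```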
scikit_learn sklearn.feature_selection.SelectorMixin sklearn.feature\_selection.SelectorMixin
========================================
*class*sklearn.feature\_selection.SelectorMixin[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L24)
Transformer mixin that performs feature selection given a support mask.
This mixin provides a feature selector implementation with `transform` and `inverse_transform` functionality given an implementation of `_get_support_mask`.
#### Methods
| | |
| --- | --- |
| [`fit_transform`](#sklearn.feature_selection.SelectorMixin.fit_transform "sklearn.feature_selection.SelectorMixin.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.feature_selection.SelectorMixin.get_feature_names_out "sklearn.feature_selection.SelectorMixin.get_feature_names_out")([input\_features]) | Mask feature names according to selected features. |
| [`get_support`](#sklearn.feature_selection.SelectorMixin.get_support "sklearn.feature_selection.SelectorMixin.get_support")([indices]) | Get a mask, or integer index, of the features selected. |
| [`inverse_transform`](#sklearn.feature_selection.SelectorMixin.inverse_transform "sklearn.feature_selection.SelectorMixin.inverse_transform")(X) | Reverse the transformation operation. |
| [`transform`](#sklearn.feature_selection.SelectorMixin.transform "sklearn.feature_selection.SelectorMixin.transform")(X) | Reduce X to the selected features. |
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L146)
Mask feature names according to selected features.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_support(*indices=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L33)
Get a mask, or integer index, of the features selected.
Parameters:
**indices**bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
Returns:
**support**array
An index that selects the retained features from a feature vector. If `indices` is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If `indices` is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L106)
Reverse the transformation operation.
Parameters:
**X**array of shape [n\_samples, n\_selected\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_original\_features]
`X` with columns of zeros inserted where features would have been removed by [`transform`](#sklearn.feature_selection.SelectorMixin.transform "sklearn.feature_selection.SelectorMixin.transform").
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L68)
Reduce X to the selected features.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_selected\_features]
The input samples with only the selected features.
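A brief sketch (not part of this docstring) of how these methods behave on a concrete selector: `SelectKBest` implements `SelectorMixin`, so `get_support`, `transform`, and `inverse_transform` are all available after fitting:
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectKBest, f_classif
>>> X, y = load_iris(return_X_y=True)
>>> selector = SelectKBest(f_classif, k=2).fit(X, y)
>>> selector.get_support()                      # boolean mask over the 4 input features
array([False, False,  True,  True])
>>> selector.get_support(indices=True)          # same selection as integer indices
array([2, 3])
>>> X_r = selector.transform(X)                 # keep only the selected columns
>>> X_r.shape
(150, 2)
>>> selector.inverse_transform(X_r).shape       # zeros re-inserted for dropped features
(150, 4)
```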
scikit_learn sklearn.neighbors.RadiusNeighborsClassifier sklearn.neighbors.RadiusNeighborsClassifier
===========================================
*class*sklearn.neighbors.RadiusNeighborsClassifier(*radius=1.0*, *\**, *weights='uniform'*, *algorithm='auto'*, *leaf\_size=30*, *p=2*, *metric='minkowski'*, *outlier\_label=None*, *metric\_params=None*, *n\_jobs=None*, *\*\*kwargs*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L318)
Classifier implementing a vote among neighbors within a given radius.
Read more in the [User Guide](../neighbors#classification).
Parameters:
**radius**float, default=1.0
Range of parameter space to use by default for [`radius_neighbors`](#sklearn.neighbors.RadiusNeighborsClassifier.radius_neighbors "sklearn.neighbors.RadiusNeighborsClassifier.radius_neighbors") queries.
**weights**{‘uniform’, ‘distance’} or callable, default=’uniform’
Weight function used in prediction. Possible values:
* ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.
* ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.
* [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
Uniform weights are used by default.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree")
* ‘kd\_tree’ will use [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree")
* ‘brute’ will use a brute-force search.
* ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.neighbors.RadiusNeighborsClassifier.fit "sklearn.neighbors.RadiusNeighborsClassifier.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**p**int, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan\_distance (l1); for p = 2, euclidean\_distance (l2). For arbitrary p, minkowski\_distance (l\_p) is used.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Default is “minkowski”, which results in the standard Euclidean distance when p = 2. See the documentation of [scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in [`distance_metrics`](sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics "sklearn.metrics.pairwise.distance_metrics") for valid metric values.
If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a [sparse graph](https://scikit-learn.org/1.1/glossary.html#term-sparse-graph), in which case only “nonzero” elements may be considered neighbors.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
**outlier\_label**{manual label, ‘most\_frequent’}, default=None
Label for outlier samples (samples with no neighbors in given radius).
* manual label: str or int label (should be the same type as y) or list of manual labels if multi-output is used.
* ‘most\_frequent’ : assign the most frequent label of y to outliers.
* None : when any outlier is detected, ValueError will be raised.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**\*\*kwargs**dict
Additional keyword arguments passed to the constructor.
Deprecated since version 1.0: The RadiusNeighborsClassifier class will no longer accept extra keyword parameters in 1.2 since they are unused.
Attributes:
**classes\_**ndarray of shape (n\_classes,)
Class labels known to the classifier.
**effective\_metric\_**str or callable
The distance metric used. It will be same as the `metric` parameter or a synonym of it, e.g. ‘euclidean’ if the `metric` parameter set to ‘minkowski’ and `p` parameter set to 2.
**effective\_metric\_params\_**dict
Additional keyword arguments for the metric function. For most metrics will be same with `metric_params` parameter, but may also contain the `p` parameter value if the `effective_metric_` attribute is set to ‘minkowski’.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_fit\_**int
Number of samples in the fitted data.
**outlier\_label\_**int or array-like of shape (n\_class,)
Label which is given to outlier samples (samples with no neighbors within the given radius).
**outputs\_2d\_**bool
False when `y`’s shape is (n\_samples, ) or (n\_samples, 1) during fit otherwise True.
See also
[`KNeighborsClassifier`](sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
Classifier implementing the k-nearest neighbors vote.
[`RadiusNeighborsRegressor`](sklearn.neighbors.radiusneighborsregressor#sklearn.neighbors.RadiusNeighborsRegressor "sklearn.neighbors.RadiusNeighborsRegressor")
Regression based on neighbors within a fixed radius.
[`KNeighborsRegressor`](sklearn.neighbors.kneighborsregressor#sklearn.neighbors.KNeighborsRegressor "sklearn.neighbors.KNeighborsRegressor")
Regression based on k-nearest neighbors.
[`NearestNeighbors`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors "sklearn.neighbors.NearestNeighbors")
Unsupervised learner for implementing neighbor searches.
#### Notes
See [Nearest Neighbors](../neighbors#neighbors) in the online documentation for a discussion of the choice of `algorithm` and `leaf_size`.
<https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm>
#### Examples
```
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import RadiusNeighborsClassifier
>>> neigh = RadiusNeighborsClassifier(radius=1.0)
>>> neigh.fit(X, y)
RadiusNeighborsClassifier(...)
>>> print(neigh.predict([[1.5]]))
[0]
>>> print(neigh.predict_proba([[1.0]]))
[[0.66666667 0.33333333]]
```
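The snippet below is a small illustrative sketch (not from the original docstring) of the `outlier_label` option described above: a query point with no training neighbors inside `radius` is assigned the most frequent training label instead of raising an error.
```
>>> from sklearn.neighbors import RadiusNeighborsClassifier
>>> X = [[0.], [1.], [2.], [3.], [4.]]
>>> y = [0, 0, 1, 1, 1]
>>> clf = RadiusNeighborsClassifier(radius=1.0, outlier_label='most_frequent')
>>> clf.fit(X, y)
RadiusNeighborsClassifier(outlier_label='most_frequent')
>>> print(clf.predict([[10.]]))   # no neighbors within radius 1.0 -> most frequent label
[1]
```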
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.neighbors.RadiusNeighborsClassifier.fit "sklearn.neighbors.RadiusNeighborsClassifier.fit")(X, y) | Fit the radius neighbors classifier from the training dataset. |
| [`get_params`](#sklearn.neighbors.RadiusNeighborsClassifier.get_params "sklearn.neighbors.RadiusNeighborsClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.neighbors.RadiusNeighborsClassifier.predict "sklearn.neighbors.RadiusNeighborsClassifier.predict")(X) | Predict the class labels for the provided data. |
| [`predict_proba`](#sklearn.neighbors.RadiusNeighborsClassifier.predict_proba "sklearn.neighbors.RadiusNeighborsClassifier.predict_proba")(X) | Return probability estimates for the test data X. |
| [`radius_neighbors`](#sklearn.neighbors.RadiusNeighborsClassifier.radius_neighbors "sklearn.neighbors.RadiusNeighborsClassifier.radius_neighbors")([X, radius, ...]) | Find the neighbors within a given radius of a point or points. |
| [`radius_neighbors_graph`](#sklearn.neighbors.RadiusNeighborsClassifier.radius_neighbors_graph "sklearn.neighbors.RadiusNeighborsClassifier.radius_neighbors_graph")([X, radius, mode, ...]) | Compute the (weighted) graph of Neighbors for points in X. |
| [`score`](#sklearn.neighbors.RadiusNeighborsClassifier.score "sklearn.neighbors.RadiusNeighborsClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.neighbors.RadiusNeighborsClassifier.set_params "sklearn.neighbors.RadiusNeighborsClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L512)
Fit the radius neighbors classifier from the training dataset.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples) if metric=’precomputed’
Training data.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_outputs)
Target values.
Returns:
**self**RadiusNeighborsClassifier
The fitted radius neighbors classifier.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L585)
Predict the class labels for the provided data.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’
Test samples.
Returns:
**y**ndarray of shape (n\_queries,) or (n\_queries, n\_outputs)
Class labels for each data sample.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L627)
Return probability estimates for the test data X.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’
Test samples.
Returns:
**p**ndarray of shape (n\_queries, n\_classes), or a list of n\_outputs of such arrays if n\_outputs > 1.
The class probabilities of the input samples. Classes are ordered by lexicographic order.
radius\_neighbors(*X=None*, *radius=None*, *return\_distance=True*, *sort\_results=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L996)
Find the neighbors within a given radius of a point or points.
Return the indices and distances of each point from the dataset lying in a ball with size `radius` around the points of the query array. Points lying on the boundary are included in the results.
The result points are *not* necessarily sorted by distance to their query point.
Parameters:
**X**array-like of (n\_samples, n\_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**radius**float, default=None
Limiting distance of neighbors to return. The default is the value passed to the constructor.
**return\_distance**bool, default=True
Whether or not to return the distances.
**sort\_results**bool, default=False
If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If `return_distance=False`, setting `sort_results=True` will result in an error.
New in version 0.22.
Returns:
**neigh\_dist**ndarray of shape (n\_samples,) of arrays
Array representing the distances to each point, only present if `return_distance=True`. The distance values are computed according to the `metric` constructor parameter.
**neigh\_ind**ndarray of shape (n\_samples,) of arrays
An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size `radius` around the query points.
#### Notes
Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, `radius_neighbors` returns arrays of objects, where each object is a 1D array of indices or distances.
#### Examples
In the following example, we construct a NearestNeighbors instance from an array representing our data set and ask which point is closest to [1, 1, 1]:
```
>>> import numpy as np
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.6)
>>> neigh.fit(samples)
NearestNeighbors(radius=1.6)
>>> rng = neigh.radius_neighbors([[1., 1., 1.]])
>>> print(np.asarray(rng[0][0]))
[1.5 0.5]
>>> print(np.asarray(rng[1][0]))
[1 2]
```
The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time.
radius\_neighbors\_graph(*X=None*, *radius=None*, *mode='connectivity'*, *sort\_results=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L1205)
Compute the (weighted) graph of Neighbors for points in X.
Neighborhoods are restricted to the points at a distance lower than `radius`.
Parameters:
**X**array-like of shape (n\_samples, n\_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**radius**float, default=None
Radius of neighborhoods. The default is the value passed to the constructor.
**mode**{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; with ‘distance’, the edges are distances between points, where the type of distance depends on the selected metric parameter in the NearestNeighbors class.
**sort\_results**bool, default=False
If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’.
New in version 0.22.
Returns:
**A**sparse-matrix of shape (n\_queries, n\_samples\_fit)
`n_samples_fit` is the number of samples in the fitted data. `A[i, j]` gives the weight of the edge connecting `i` to `j`. The matrix is of CSR format.
See also
[`kneighbors_graph`](sklearn.neighbors.kneighbors_graph#sklearn.neighbors.kneighbors_graph "sklearn.neighbors.kneighbors_graph")
Compute the (weighted) graph of k-Neighbors for points in X.
#### Examples
```
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.5)
>>> neigh.fit(X)
NearestNeighbors(radius=1.5)
>>> A = neigh.radius_neighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 0.],
[1., 0., 1.]])
```
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.utils.sparsefuncs.inplace_column_scale sklearn.utils.sparsefuncs.inplace\_column\_scale
================================================
sklearn.utils.sparsefuncs.inplace\_column\_scale(*X*, *scale*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/sparsefuncs.py#L223)
Inplace column scaling of a CSC/CSR matrix.
Scale each feature of the data matrix by multiplying with specific scale provided by the caller assuming a (n\_samples, n\_features) shape.
Parameters:
**X**sparse matrix of shape (n\_samples, n\_features)
Matrix to normalize using the variance of the features. It should be of CSC or CSR format.
**scale**ndarray of shape (n\_features,), dtype={np.float32, np.float64}
Array of precomputed feature-wise values to use for scaling.
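A brief sketch of typical usage (not part of the original docstring): the scaling happens in place and the function returns `None`.
```
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.sparsefuncs import inplace_column_scale
>>> X = csr_matrix(np.array([[1., 2.], [3., 4.]]))
>>> inplace_column_scale(X, np.array([10., 0.5]))   # scales column 0 by 10, column 1 by 0.5
>>> X.toarray()
array([[10.,  1.],
       [30.,  2.]])
```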
scikit_learn sklearn.linear_model.RidgeClassifier sklearn.linear\_model.RidgeClassifier
=====================================
*class*sklearn.linear\_model.RidgeClassifier(*alpha=1.0*, *\**, *fit\_intercept=True*, *normalize='deprecated'*, *copy\_X=True*, *max\_iter=None*, *tol=0.001*, *class\_weight=None*, *solver='auto'*, *positive=False*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L1219)
Classifier using Ridge regression.
This classifier first converts the target values into `{-1, 1}` and then treats the problem as a regression task (multi-output regression in the multiclass case).
Read more in the [User Guide](../linear_model#ridge-regression).
Parameters:
**alpha**float, default=1.0
Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to `1 / (2C)` in other linear models such as [`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") or [`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC").
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**copy\_X**bool, default=True
If True, X will be copied; else, it may be overwritten.
**max\_iter**int, default=None
Maximum number of iterations for conjugate gradient solver. The default value is determined by scipy.sparse.linalg.
**tol**float, default=1e-3
Precision of the solution.
**class\_weight**dict or ‘balanced’, default=None
Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`.
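A quick numeric sketch (not from the original docstring) of the formula above for an imbalanced label vector:
```
>>> import numpy as np
>>> y = np.array([0, 0, 0, 1])                  # 3 samples of class 0, 1 sample of class 1
>>> n_samples, n_classes = len(y), len(np.unique(y))
>>> n_samples / (n_classes * np.bincount(y))    # weights used by class_weight='balanced'
array([0.66666667, 2.        ])
```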
**solver**{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse\_cg’, ‘sag’, ‘saga’, ‘lbfgs’}, default=’auto’
Solver to use in the computational routines:
* ‘auto’ chooses the solver automatically based on the type of data.
* ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. It is the most stable solver, in particular more stable for singular matrices than ‘cholesky’ at the cost of being slower.
* ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution.
* ‘sparse\_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set `tol` and `max_iter`).
* ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
* ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its unbiased and more flexible version named SAGA. Both methods use an iterative procedure, and are often faster than other solvers when both n\_samples and n\_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
New in version 0.17: Stochastic Average Gradient descent solver.
New in version 0.19: SAGA solver.
* ‘lbfgs’ uses L-BFGS-B algorithm implemented in `scipy.optimize.minimize`. It can be used only when `positive` is True.
**positive**bool, default=False
When set to `True`, forces the coefficients to be positive. Only ‘lbfgs’ solver is supported in this case.
**random\_state**int, RandomState instance, default=None
Used when `solver` == ‘sag’ or ‘saga’ to shuffle the data. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state) for details.
Attributes:
**coef\_**ndarray of shape (1, n\_features) or (n\_classes, n\_features)
Coefficient of the features in the decision function.
`coef_` is of shape (1, n\_features) when the given problem is binary.
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function. Set to 0.0 if `fit_intercept = False`.
**n\_iter\_**None or ndarray of shape (n\_targets,)
Actual number of iterations for each target. Available only for sag and lsqr solvers. Other solvers will return None.
[`classes_`](#sklearn.linear_model.RidgeClassifier.classes_ "sklearn.linear_model.RidgeClassifier.classes_")ndarray of shape (n\_classes,)
Classes labels.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`Ridge`](sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge")
Ridge regression.
[`RidgeClassifierCV`](sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV "sklearn.linear_model.RidgeClassifierCV")
Ridge classifier with built-in cross validation.
#### Notes
For multi-class classification, n\_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge.
#### Examples
```
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import RidgeClassifier
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = RidgeClassifier().fit(X, y)
>>> clf.score(X, y)
0.9595...
```
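As a follow-up sketch (not part of the original docstring), for this binary problem the predicted class simply follows the sign of `decision_function`: a positive score selects `classes_[1]`.
```
>>> import numpy as np
>>> scores = clf.decision_function(X)                     # signed distance to the hyperplane
>>> pred_from_scores = clf.classes_[(scores > 0).astype(int)]
>>> np.array_equal(pred_from_scores, clf.predict(X))
True
```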
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.linear_model.RidgeClassifier.decision_function "sklearn.linear_model.RidgeClassifier.decision_function")(X) | Predict confidence scores for samples. |
| [`fit`](#sklearn.linear_model.RidgeClassifier.fit "sklearn.linear_model.RidgeClassifier.fit")(X, y[, sample\_weight]) | Fit Ridge classifier model. |
| [`get_params`](#sklearn.linear_model.RidgeClassifier.get_params "sklearn.linear_model.RidgeClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.RidgeClassifier.predict "sklearn.linear_model.RidgeClassifier.predict")(X) | Predict class labels for samples in `X`. |
| [`score`](#sklearn.linear_model.RidgeClassifier.score "sklearn.linear_model.RidgeClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.linear_model.RidgeClassifier.set_params "sklearn.linear_model.RidgeClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
*property*classes\_
Classes labels.
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L408)
Predict confidence scores for samples.
The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the confidence scores.
Returns:
**scores**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Confidence scores per `(n_samples, n_classes)` combination. In the binary case, confidence score for `self.classes_[1]` where >0 means this class would be predicted.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L1397)
Fit Ridge classifier model.
Parameters:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Training data.
**y**ndarray of shape (n\_samples,)
Target values.
**sample\_weight**float or ndarray of shape (n\_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight.
New in version 0.17: *sample\_weight* support to RidgeClassifier.
Returns:
**self**object
Instance of the estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L1185)
Predict class labels for samples in `X`.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to predict the targets.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_outputs)
Vector or matrix containing the predictions. In binary and multiclass problems, this is a vector containing `n_samples`. In a multilabel problem, it returns a matrix of shape `(n_samples, n_outputs)`.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.RidgeClassifier`
-----------------------------------------------------
[Classification of text documents using sparse features](../../auto_examples/text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
scikit_learn sklearn.pipeline.Pipeline sklearn.pipeline.Pipeline
=========================
*class*sklearn.pipeline.Pipeline(*steps*, *\**, *memory=None*, *verbose=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L52)
Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be ‘transforms’, that is, they must implement `fit` and `transform` methods. The final estimator only needs to implement `fit`. The transformers in the pipeline can be cached using `memory` argument.
The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a `'__'`, as in the example below. A step’s estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to `'passthrough'` or `None`.
Read more in the [User Guide](../compose#pipeline).
New in version 0.5.
Parameters:
**steps**list of tuple
List of (name, transform) tuples (implementing `fit`/`transform`) that are chained in sequential order. The last transform must be an estimator.
**memory**str or object with the joblib.Memory interface, default=None
Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute `named_steps` or `steps` to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.
**verbose**bool, default=False
If True, the time elapsed while fitting each step will be printed as it is completed.
Attributes:
[`named_steps`](#sklearn.pipeline.Pipeline.named_steps "sklearn.pipeline.Pipeline.named_steps")[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Access the steps by name.
[`classes_`](#sklearn.pipeline.Pipeline.classes_ "sklearn.pipeline.Pipeline.classes_")ndarray of shape (n\_classes,)
The classes labels.
[`n_features_in_`](#sklearn.pipeline.Pipeline.n_features_in_ "sklearn.pipeline.Pipeline.n_features_in_")int
Number of features seen during the first step's `fit` method.
[`feature_names_in_`](#sklearn.pipeline.Pipeline.feature_names_in_ "sklearn.pipeline.Pipeline.feature_names_in_")ndarray of shape (`n_features_in_`,)
Names of features seen during the first step's `fit` method.
See also
[`make_pipeline`](sklearn.pipeline.make_pipeline#sklearn.pipeline.make_pipeline "sklearn.pipeline.make_pipeline")
Convenience function for simplified pipeline construction.
#### Examples
```
>>> from sklearn.svm import SVC
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import Pipeline
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
... random_state=0)
>>> pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC())])
>>> # The pipeline can be used as any other estimator
>>> # and avoids leaking the test set into the train set
>>> pipe.fit(X_train, y_train)
Pipeline(steps=[('scaler', StandardScaler()), ('svc', SVC())])
>>> pipe.score(X_test, y_test)
0.88
```
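Building on the example above, here is a brief sketch (not taken from the original docstring) of the `'<step>__<parameter>'` convention and of disabling a step with `'passthrough'`, as described earlier:
```
>>> pipe.set_params(svc__C=10)                  # set a parameter of the 'svc' step
Pipeline(steps=[('scaler', StandardScaler()), ('svc', SVC(C=10))])
>>> pipe.named_steps['scaler']                  # access a step by name
StandardScaler()
>>> pipe.set_params(scaler='passthrough')       # replace a transformer with 'passthrough'
Pipeline(steps=[('scaler', 'passthrough'), ('svc', SVC(C=10))])
```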
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.pipeline.Pipeline.decision_function "sklearn.pipeline.Pipeline.decision_function")(X) | Transform the data, and apply `decision_function` with the final estimator. |
| [`fit`](#sklearn.pipeline.Pipeline.fit "sklearn.pipeline.Pipeline.fit")(X[, y]) | Fit the model. |
| [`fit_predict`](#sklearn.pipeline.Pipeline.fit_predict "sklearn.pipeline.Pipeline.fit_predict")(X[, y]) | Transform the data, and apply `fit_predict` with the final estimator. |
| [`fit_transform`](#sklearn.pipeline.Pipeline.fit_transform "sklearn.pipeline.Pipeline.fit_transform")(X[, y]) | Fit the model and transform with the final estimator. |
| [`get_feature_names_out`](#sklearn.pipeline.Pipeline.get_feature_names_out "sklearn.pipeline.Pipeline.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.pipeline.Pipeline.get_params "sklearn.pipeline.Pipeline.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.pipeline.Pipeline.inverse_transform "sklearn.pipeline.Pipeline.inverse_transform")(Xt) | Apply `inverse_transform` for each step in a reverse order. |
| [`predict`](#sklearn.pipeline.Pipeline.predict "sklearn.pipeline.Pipeline.predict")(X, \*\*predict\_params) | Transform the data, and apply `predict` with the final estimator. |
| [`predict_log_proba`](#sklearn.pipeline.Pipeline.predict_log_proba "sklearn.pipeline.Pipeline.predict_log_proba")(X, \*\*predict\_log\_proba\_params) | Transform the data, and apply `predict_log_proba` with the final estimator. |
| [`predict_proba`](#sklearn.pipeline.Pipeline.predict_proba "sklearn.pipeline.Pipeline.predict_proba")(X, \*\*predict\_proba\_params) | Transform the data, and apply `predict_proba` with the final estimator. |
| [`score`](#sklearn.pipeline.Pipeline.score "sklearn.pipeline.Pipeline.score")(X[, y, sample\_weight]) | Transform the data, and apply `score` with the final estimator. |
| [`score_samples`](#sklearn.pipeline.Pipeline.score_samples "sklearn.pipeline.Pipeline.score_samples")(X) | Transform the data, and apply `score_samples` with the final estimator. |
| [`set_params`](#sklearn.pipeline.Pipeline.set_params "sklearn.pipeline.Pipeline.set_params")(\*\*kwargs) | Set the parameters of this estimator. |
| [`transform`](#sklearn.pipeline.Pipeline.transform "sklearn.pipeline.Pipeline.transform")(X) | Transform the data, and apply `transform` with the final estimator. |
*property*classes\_
The class labels. Only exists if the last step is a classifier.
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L526)
Transform the data, and apply `decision_function` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `decision_function` method. Only valid if the final estimator implements `decision_function`.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of first step of the pipeline.
Returns:
**y\_score**ndarray of shape (n\_samples, n\_classes)
Result of calling `decision_function` on the final estimator.
*property*feature\_names\_in\_
Names of features seen during the first step's `fit` method.
fit(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L351)
Fit the model.
Fit all the transformers one after the other and transform the data. Finally, fit the transformed data using the final estimator.
Parameters:
**X**iterable
Training data. Must fulfill input requirements of first step of the pipeline.
**y**iterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
**\*\*fit\_params**dict of string -> object
Parameters passed to the `fit` method of each step, where each parameter name is prefixed such that parameter `p` for step `s` has key `s__p`.
Returns:
**self**object
Pipeline with fitted steps.
fit\_predict(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L460)
Transform the data, and apply `fit_predict` with the final estimator.
Call `fit_transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `fit_predict` method. Only valid if the final estimator implements `fit_predict`.
Parameters:
**X**iterable
Training data. Must fulfill input requirements of first step of the pipeline.
**y**iterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
**\*\*fit\_params**dict of string -> object
Parameters passed to the `fit` method of each step, where each parameter name is prefixed such that parameter `p` for step `s` has key `s__p`.
Returns:
**y\_pred**ndarray
Result of calling `fit_predict` on the final estimator.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L386)
Fit the model and transform with the final estimator.
Fits all the transformers one after the other and transform the data. Then uses `fit_transform` on transformed data with the final estimator.
Parameters:
**X**iterable
Training data. Must fulfill input requirements of first step of the pipeline.
**y**iterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
**\*\*fit\_params**dict of string -> object
Parameters passed to the `fit` method of each step, where each parameter name is prefixed such that parameter `p` for step `s` has key `s__p`.
Returns:
**Xt**ndarray of shape (n\_samples, n\_transformed\_features)
Transformed samples.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L710)
Get output feature names for transformation.
Transform input features using the pipeline.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L150)
Get parameters for this estimator.
Returns the parameters given in the constructor as well as the estimators contained within the `steps` of the `Pipeline`.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**mapping of string to any
Parameter names mapped to their values.
inverse\_transform(*Xt*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L641)
Apply `inverse_transform` for each step in a reverse order.
All estimators in the pipeline must support `inverse_transform`.
Parameters:
**Xt**array-like of shape (n\_samples, n\_transformed\_features)
Data samples, where `n_samples` is the number of samples and `n_features` is the number of features. Must fulfill input requirements of last step of pipeline’s `inverse_transform` method.
Returns:
**Xt**ndarray of shape (n\_samples, n\_features)
Inverse transformed data, that is, data in the original feature space.
*property*n\_features\_in\_
Number of features seen during the first step's `fit` method.
*property*named\_steps
Access the steps by name.
Read-only attribute to access any step by given name. Keys are steps names and values are the steps objects.
predict(*X*, *\*\*predict\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L426)
Transform the data, and apply `predict` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict` method. Only valid if the final estimator implements `predict`.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of first step of the pipeline.
**\*\*predict\_params**dict of string -> object
Parameters to the `predict` called at the end of all transformations in the pipeline. Note that while this may be used to return uncertainties from some models with return\_std or return\_cov, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator.
New in version 0.20.
Returns:
**y\_pred**ndarray
Result of calling `predict` on the final estimator.
predict\_log\_proba(*X*, *\*\*predict\_log\_proba\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L576)
Transform the data, and apply `predict_log_proba` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict_log_proba` method. Only valid if the final estimator implements `predict_log_proba`.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of first step of the pipeline.
**\*\*predict\_log\_proba\_params**dict of string -> object
Parameters to the `predict_log_proba` called at the end of all transformations in the pipeline.
Returns:
**y\_log\_proba**ndarray of shape (n\_samples, n\_classes)
Result of calling `predict_log_proba` on the final estimator.
predict\_proba(*X*, *\*\*predict\_proba\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L497)
Transform the data, and apply `predict_proba` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `predict_proba` method. Only valid if the final estimator implements `predict_proba`.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of first step of the pipeline.
**\*\*predict\_proba\_params**dict of string -> object
Parameters to the `predict_proba` called at the end of all transformations in the pipeline.
Returns:
**y\_proba**ndarray of shape (n\_samples, n\_classes)
Result of calling `predict_proba` on the final estimator.
score(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L666)
Transform the data, and apply `score` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `score` method. Only valid if the final estimator implements `score`.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of first step of the pipeline.
**y**iterable, default=None
Targets used for scoring. Must fulfill label requirements for all steps of the pipeline.
**sample\_weight**array-like, default=None
If not None, this argument is passed as `sample_weight` keyword argument to the `score` method of the final estimator.
Returns:
**score**float
Result of calling `score` on the final estimator.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L551)
Transform the data, and apply `score_samples` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `score_samples` method. Only valid if the final estimator implements `score_samples`.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of first step of the pipeline.
Returns:
**y\_score**ndarray of shape (n\_samples,)
Result of calling `score_samples` on the final estimator.
set\_params(*\*\*kwargs*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L169)
Set the parameters of this estimator.
Valid parameter keys can be listed with `get_params()`. Note that you can directly set the parameters of the estimators contained in `steps`.
Parameters:
**\*\*kwargs**dict
Parameters of this estimator or parameters of estimators contained in `steps`. Parameters of the steps may be set using its name and the parameter name separated by a ‘\_\_’.
Returns:
**self**object
Pipeline class instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/pipeline.py#L610)
Transform the data, and apply `transform` with the final estimator.
Call `transform` of each transformer in the pipeline. The transformed data are finally passed to the final estimator that calls `transform` method. Only valid if the final estimator implements `transform`.
This also works when the final estimator is `None`, in which case all prior transformations are applied.
Parameters:
**X**iterable
Data to transform. Must fulfill input requirements of first step of the pipeline.
Returns:
**Xt**ndarray of shape (n\_samples, n\_transformed\_features)
Transformed data.
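As an illustrative sketch (not one of the official examples), the following assumes a hypothetical pipeline whose final step implements `transform` and another whose final step implements `score_samples`:
```
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 4.0], [3.0, 8.0]])  # hypothetical data

# Final step implements `transform`, so Pipeline.transform is available.
pca_pipe = Pipeline([("scale", StandardScaler()), ("pca", PCA(n_components=1))]).fit(X)
Xt = pca_pipe.transform(X)                # shape (n_samples, n_transformed_features)

# Final step implements `score_samples`, so Pipeline.score_samples is available.
kde_pipe = Pipeline([("scale", StandardScaler()), ("kde", KernelDensity())]).fit(X)
log_density = kde_pipe.score_samples(X)   # one score per sample
```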
Examples using `sklearn.pipeline.Pipeline`
------------------------------------------
[Feature agglomeration vs. univariate selection](../../auto_examples/cluster/plot_feature_agglomeration_vs_univariate_selection#sphx-glr-auto-examples-cluster-plot-feature-agglomeration-vs-univariate-selection-py)
[Pipeline ANOVA SVM](../../auto_examples/feature_selection/plot_feature_selection_pipeline#sphx-glr-auto-examples-feature-selection-plot-feature-selection-pipeline-py)
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Permutation Importance vs Random Forest Feature Importance (MDI)](../../auto_examples/inspection/plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py)
[Displaying Pipelines](../../auto_examples/miscellaneous/plot_pipeline_display#sphx-glr-auto-examples-miscellaneous-plot-pipeline-display-py)
[Explicit feature map approximation for RBF kernels](../../auto_examples/miscellaneous/plot_kernel_approximation#sphx-glr-auto-examples-miscellaneous-plot-kernel-approximation-py)
[Balance model complexity and cross-validated score](../../auto_examples/model_selection/plot_grid_search_refit_callable#sphx-glr-auto-examples-model-selection-plot-grid-search-refit-callable-py)
[Sample pipeline for text feature extraction and evaluation](../../auto_examples/model_selection/grid_search_text_feature_extraction#sphx-glr-auto-examples-model-selection-grid-search-text-feature-extraction-py)
[Underfitting vs. Overfitting](../../auto_examples/model_selection/plot_underfitting_overfitting#sphx-glr-auto-examples-model-selection-plot-underfitting-overfitting-py)
[Caching nearest neighbors](../../auto_examples/neighbors/plot_caching_nearest_neighbors#sphx-glr-auto-examples-neighbors-plot-caching-nearest-neighbors-py)
[Comparing Nearest Neighbors with and without Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_classification#sphx-glr-auto-examples-neighbors-plot-nca-classification-py)
[Restricted Boltzmann Machine features for digit classification](../../auto_examples/neural_networks/plot_rbm_logistic_classification#sphx-glr-auto-examples-neural-networks-plot-rbm-logistic-classification-py)
[Column Transformer with Heterogeneous Data Sources](../../auto_examples/compose/plot_column_transformer#sphx-glr-auto-examples-compose-plot-column-transformer-py)
[Column Transformer with Mixed Types](../../auto_examples/compose/plot_column_transformer_mixed_types#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py)
[Concatenating multiple feature extraction methods](../../auto_examples/compose/plot_feature_union#sphx-glr-auto-examples-compose-plot-feature-union-py)
[Pipelining: chaining a PCA and a logistic regression](../../auto_examples/compose/plot_digits_pipe#sphx-glr-auto-examples-compose-plot-digits-pipe-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](../../auto_examples/compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
[Semi-supervised Classification on a Text Dataset](../../auto_examples/semi_supervised/plot_semi_supervised_newsgroups#sphx-glr-auto-examples-semi-supervised-plot-semi-supervised-newsgroups-py)
[SVM-Anova: SVM with univariate feature selection](../../auto_examples/svm/plot_svm_anova#sphx-glr-auto-examples-svm-plot-svm-anova-py)
scikit_learn sklearn.cluster.SpectralClustering sklearn.cluster.SpectralClustering
==================================
*class*sklearn.cluster.SpectralClustering(*n\_clusters=8*, *\**, *eigen\_solver=None*, *n\_components=None*, *random\_state=None*, *n\_init=10*, *gamma=1.0*, *affinity='rbf'*, *n\_neighbors=10*, *eigen\_tol=0.0*, *assign\_labels='kmeans'*, *degree=3*, *coef0=1*, *kernel\_params=None*, *n\_jobs=None*, *verbose=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_spectral.py#L379)
Apply clustering to a projection of the normalized Laplacian.
In practice Spectral Clustering is very useful when the structure of the individual clusters is highly non-convex, or more generally when a measure of the center and spread of the cluster is not a suitable description of the complete cluster, such as when clusters are nested circles on the 2D plane.
If the affinity matrix is the adjacency matrix of a graph, this method can be used to find normalized graph cuts [[1]](#r5f6cbeb1558e-1), [[2]](#r5f6cbeb1558e-2).
When calling `fit`, an affinity matrix is constructed using either a kernel function such as the Gaussian (aka RBF) kernel with Euclidean distance `d(X, X)`:
```
np.exp(-gamma * d(X,X) ** 2)
```
or a k-nearest neighbors connectivity matrix.
Alternatively, a user-provided affinity matrix can be specified by setting `affinity='precomputed'`.
Read more in the [User Guide](../clustering#spectral-clustering).
Parameters:
**n\_clusters**int, default=8
The dimension of the projection subspace.
**eigen\_solver**{‘arpack’, ‘lobpcg’, ‘amg’}, default=None
The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then `'arpack'` is used. See [[4]](#r5f6cbeb1558e-4) for more details regarding `'lobpcg'`.
**n\_components**int, default=n\_clusters
Number of eigenvectors to use for the spectral embedding.
**random\_state**int, RandomState instance, default=None
A pseudo random number generator used for the initialization of the lobpcg eigenvectors decomposition when `eigen_solver == 'amg'`, and for the K-Means initialization. Use an int to make the results deterministic across calls (See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state)).
Note
When using `eigen_solver == 'amg'`, it is necessary to also fix the global numpy seed with `np.random.seed(int)` to get deterministic results. See <https://github.com/pyamg/pyamg/issues/139> for further information.
**n\_init**int, default=10
Number of time the k-means algorithm will be run with different centroid seeds. The final results will be the best output of n\_init consecutive runs in terms of inertia. Only used if `assign_labels='kmeans'`.
**gamma**float, default=1.0
Kernel coefficient for rbf, poly, sigmoid, laplacian and chi2 kernels. Ignored for `affinity='nearest_neighbors'`.
**affinity**str or callable, default=’rbf’
How to construct the affinity matrix.
* ‘nearest\_neighbors’: construct the affinity matrix by computing a graph of nearest neighbors.
* ‘rbf’: construct the affinity matrix using a radial basis function (RBF) kernel.
* ‘precomputed’: interpret `X` as a precomputed affinity matrix, where larger values indicate greater similarity between instances.
* ‘precomputed\_nearest\_neighbors’: interpret `X` as a sparse graph of precomputed distances, and construct a binary affinity matrix from the `n_neighbors` nearest neighbors of each instance.
* one of the kernels supported by `pairwise_kernels`.
Only kernels that produce similarity scores (non-negative values that increase with similarity) should be used. This property is not checked by the clustering algorithm.
**n\_neighbors**int, default=10
Number of neighbors to use when constructing the affinity matrix using the nearest neighbors method. Ignored for `affinity='rbf'`.
**eigen\_tol**float, default=0.0
Stopping criterion for eigendecomposition of the Laplacian matrix when `eigen_solver='arpack'`.
**assign\_labels**{‘kmeans’, ‘discretize’, ‘cluster\_qr’}, default=’kmeans’
The strategy for assigning labels in the embedding space. There are two ways to assign labels after the Laplacian embedding. k-means is a popular choice, but it can be sensitive to initialization. Discretization is another approach which is less sensitive to random initialization [[3]](#r5f6cbeb1558e-3). The cluster\_qr method [[5]](#r5f6cbeb1558e-5) directly extracts clusters from eigenvectors in spectral clustering. In contrast to k-means and discretization, cluster\_qr has no tuning parameters and runs no iterations, yet may outperform k-means and discretization in terms of both quality and speed.
Changed in version 1.1: Added new labeling method ‘cluster\_qr’.
**degree**float, default=3
Degree of the polynomial kernel. Ignored by other kernels.
**coef0**float, default=1
Zero coefficient for polynomial and sigmoid kernels. Ignored by other kernels.
**kernel\_params**dict of str to any, default=None
Parameters (keyword arguments) and values for kernel passed as callable object. Ignored by other kernels.
**n\_jobs**int, default=None
The number of parallel jobs to run when `affinity='nearest_neighbors'` or `affinity='precomputed_nearest_neighbors'`. The neighbors search will be done in parallel. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**bool, default=False
Verbosity mode.
New in version 0.24.
Attributes:
**affinity\_matrix\_**array-like of shape (n\_samples, n\_samples)
Affinity matrix used for clustering. Available only after calling `fit`.
**labels\_**ndarray of shape (n\_samples,)
Labels of each point
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`sklearn.cluster.KMeans`](sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans")
K-Means clustering.
[`sklearn.cluster.DBSCAN`](sklearn.cluster.dbscan#sklearn.cluster.DBSCAN "sklearn.cluster.DBSCAN")
Density-Based Spatial Clustering of Applications with Noise.
#### Notes
A distance matrix for which 0 indicates identical elements and high values indicate very dissimilar elements can be transformed into an affinity / similarity matrix that is well-suited for the algorithm by applying the Gaussian (aka RBF, heat) kernel:
```
np.exp(- dist_matrix ** 2 / (2. * delta ** 2))
```
where `delta` is a free parameter representing the width of the Gaussian kernel.
An alternative is to take a symmetric version of the k-nearest neighbors connectivity matrix of the points.
If the pyamg package is installed, it is used: this greatly speeds up computation.
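A minimal sketch of the transformation described above, using toy data and `delta=1.0` chosen here purely for illustration:
```
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import pairwise_distances

X = np.array([[1, 1], [2, 1], [1, 0], [4, 7], [3, 5], [3, 6]])  # toy data
dist_matrix = pairwise_distances(X)        # zeros on the diagonal
delta = 1.0                                # free width parameter of the kernel
affinity = np.exp(-dist_matrix ** 2 / (2.0 * delta ** 2))

# The resulting similarity matrix can be passed with affinity='precomputed'.
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
```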
#### References
[[1](#id1)] [Normalized cuts and image segmentation, 2000 Jianbo Shi, Jitendra Malik](https://doi.org/10.1109/34.868688)
[[2](#id2)] [A Tutorial on Spectral Clustering, 2007 Ulrike von Luxburg](https://doi.org/10.1007/s11222-007-9033-z)
[[3](#id4)] [Multiclass spectral clustering, 2003 Stella X. Yu, Jianbo Shi](https://www1.icsi.berkeley.edu/~stellayu/publication/doc/2003kwayICCV.pdf)
[[4](#id3)] [Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method, 2001 A. V. Knyazev SIAM Journal on Scientific Computing 23, no. 2, pp. 517-541.](https://doi.org/10.1137/S1064827500366124)
[[5](#id5)] [Simple, direct, and efficient multi-way spectral clustering, 2019 Anil Damle, Victor Minden, Lexing Ying](https://doi.org/10.1093/imaiai/iay008)
#### Examples
```
>>> from sklearn.cluster import SpectralClustering
>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [1, 0],
... [4, 7], [3, 5], [3, 6]])
>>> clustering = SpectralClustering(n_clusters=2,
... assign_labels='discretize',
... random_state=0).fit(X)
>>> clustering.labels_
array([1, 1, 1, 0, 0, 0])
>>> clustering
SpectralClustering(assign_labels='discretize', n_clusters=2,
random_state=0)
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cluster.SpectralClustering.fit "sklearn.cluster.SpectralClustering.fit")(X[, y]) | Perform spectral clustering from features, or affinity matrix. |
| [`fit_predict`](#sklearn.cluster.SpectralClustering.fit_predict "sklearn.cluster.SpectralClustering.fit_predict")(X[, y]) | Perform spectral clustering on `X` and return cluster labels. |
| [`get_params`](#sklearn.cluster.SpectralClustering.get_params "sklearn.cluster.SpectralClustering.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.cluster.SpectralClustering.set_params "sklearn.cluster.SpectralClustering.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_spectral.py#L625)
Perform spectral clustering from features, or affinity matrix.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples)
Training instances to cluster, similarities / affinities between instances if `affinity='precomputed'`, or distances between instances if `affinity='precomputed_nearest_neighbors'`. If a sparse matrix is provided in a format other than `csr_matrix`, `csc_matrix`, or `coo_matrix`, it will be converted into a sparse `csr_matrix`.
**y**Ignored
Not used, present here for API consistency by convention.
Returns:
**self**object
A fitted instance of the estimator.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_spectral.py#L753)
Perform spectral clustering on `X` and return cluster labels.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples)
Training instances to cluster, similarities / affinities between instances if `affinity='precomputed'`, or distances between instances if `affinity='precomputed_nearest_neighbors'`. If a sparse matrix is provided in a format other than `csr_matrix`, `csc_matrix`, or `coo_matrix`, it will be converted into a sparse `csr_matrix`.
**y**Ignored
Not used, present here for API consistency by convention.
Returns:
**labels**ndarray of shape (n\_samples,)
Cluster labels.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.cluster.SpectralClustering`
---------------------------------------------------
[Comparing different clustering algorithms on toy datasets](../../auto_examples/cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
scikit_learn sklearn.linear_model.LassoLars sklearn.linear\_model.LassoLars
===============================
*class*sklearn.linear\_model.LassoLars(*alpha=1.0*, *\**, *fit\_intercept=True*, *verbose=False*, *normalize='deprecated'*, *precompute='auto'*, *max\_iter=500*, *eps=2.220446049250313e-16*, *copy\_X=True*, *fit\_path=True*, *positive=False*, *jitter=None*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_least_angle.py#L1141)
Lasso model fit with Least Angle Regression a.k.a. Lars.
It is a Linear Model trained with an L1 prior as regularizer.
The optimization objective for Lasso is:
```
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
```
Read more in the [User Guide](../linear_model#least-angle-regression).
Parameters:
**alpha**float, default=1.0
Constant that multiplies the penalty term. Defaults to 1.0. `alpha = 0` is equivalent to an ordinary least square, solved by [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression"). For numerical reasons, using `alpha = 0` with the LassoLars object is not advised and you should prefer the LinearRegression object.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**verbose**bool or int, default=False
Sets the verbosity amount.
**normalize**bool, default=True
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0. It will default to False in 1.2 and be removed in 1.4.
**precompute**bool, ‘auto’ or array-like, default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**max\_iter**int, default=500
Maximum number of iterations to perform.
**eps**float, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the `tol` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
**copy\_X**bool, default=True
If True, X will be copied; else, it may be overwritten.
**fit\_path**bool, default=True
If `True` the full path is stored in the `coef_path_` attribute. If you compute the solution for a large problem or many targets, setting `fit_path` to `False` will lead to a speedup, especially with a small alpha.
**positive**bool, default=False
Restrict coefficients to be >= 0. Be aware that you might want to remove fit\_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (`alphas_[alphas_ > 0.].min()` when fit\_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.
**jitter**float, default=None
Upper bound on a uniform noise parameter to be added to the `y` values, to satisfy the model’s assumption of one-at-a-time computations. Might help with stability.
New in version 0.23.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for jittering. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state). Ignored if `jitter` is None.
New in version 0.23.
Attributes:
**alphas\_**array-like of shape (n\_alphas + 1,) or list of such arrays
Maximum of covariances (in absolute value) at each iteration. `n_alphas` is either `max_iter`, `n_features` or the number of nodes in the path with `alpha >= alpha_min`, whichever is smaller. If this is a list of array-like, the length of the outer list is `n_targets`.
**active\_**list of length n\_alphas or list of such lists
Indices of active variables at the end of the path. If this is a list of list, the length of the outer list is `n_targets`.
**coef\_path\_**array-like of shape (n\_features, n\_alphas + 1) or list of such arrays
If a list is passed it’s expected to be one of n\_targets such arrays. The varying values of the coefficients along the path. It is not present if the `fit_path` parameter is `False`. If this is a list of array-like, the length of the outer list is `n_targets`.
**coef\_**array-like of shape (n\_features,) or (n\_targets, n\_features)
Parameter vector (w in the formulation formula).
**intercept\_**float or array-like of shape (n\_targets,)
Independent term in decision function.
**n\_iter\_**array-like or int
The number of iterations taken by lars\_path to find the grid of alphas for each target.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`lasso_path`](sklearn.linear_model.lasso_path#sklearn.linear_model.lasso_path "sklearn.linear_model.lasso_path")
Compute Lasso path with coordinate descent.
[`Lasso`](sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso")
Linear Model trained with L1 prior as regularizer (aka the Lasso).
[`LassoCV`](sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")
Lasso linear model with iterative fitting along a regularization path.
[`LassoLarsCV`](sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")
Cross-validated Lasso, using the LARS algorithm.
[`LassoLarsIC`](sklearn.linear_model.lassolarsic#sklearn.linear_model.LassoLarsIC "sklearn.linear_model.LassoLarsIC")
Lasso model fit with Lars using BIC or AIC for model selection.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Sparse coding.
#### Examples
```
>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=0.01, normalize=False)
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
LassoLars(alpha=0.01, normalize=False)
>>> print(reg.coef_)
[ 0. -0.955...]
```
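Continuing the example above and assuming the default `fit_path=True`, the regularization path computed by LARS can then be inspected through the attributes documented above (a sketch only; outputs omitted):
```
reg.alphas_       # maximum covariances (in absolute value) at each step of the path
reg.coef_path_    # coefficients along the path, shape (n_features, n_alphas + 1)
reg.coef_         # coefficients at the requested alpha
```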
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.LassoLars.fit "sklearn.linear_model.LassoLars.fit")(X, y[, Xy]) | Fit the model using X, y as training data. |
| [`get_params`](#sklearn.linear_model.LassoLars.get_params "sklearn.linear_model.LassoLars.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.LassoLars.predict "sklearn.linear_model.LassoLars.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.LassoLars.score "sklearn.linear_model.LassoLars.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.LassoLars.set_params "sklearn.linear_model.LassoLars.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *Xy=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_least_angle.py#L1088)
Fit the model using X, y as training data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**Xy**array-like of shape (n\_samples,) or (n\_samples, n\_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
Returns:
**self**object
Returns an instance of self.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.multiclass.OneVsRestClassifier sklearn.multiclass.OneVsRestClassifier
======================================
*class*sklearn.multiclass.OneVsRestClassifier(*estimator*, *\**, *n\_jobs=None*, *verbose=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multiclass.py#L187)
One-vs-the-rest (OvR) multiclass strategy.
Also known as one-vs-all, this strategy consists in fitting one classifier per class. For each classifier, the class is fitted against all the other classes. In addition to its computational efficiency (only `n_classes` classifiers are needed), one advantage of this approach is its interpretability. Since each class is represented by one and only one classifier, it is possible to gain knowledge about the class by inspecting its corresponding classifier. This is the most commonly used strategy for multiclass classification and is a fair default choice.
OneVsRestClassifier can also be used for multilabel classification. To use this feature, provide an indicator matrix for the target `y` when calling `.fit`. In other words, the target labels should be formatted as a 2D binary (0/1) matrix, where [i, j] == 1 indicates the presence of label j in sample i. This estimator uses the binary relevance method to perform multilabel classification, which involves training one binary classifier independently for each label.
Read more in the [User Guide](../multiclass#ovr-classification).
Parameters:
**estimator**estimator object
An estimator object implementing [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) and one of [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function) or [predict\_proba](https://scikit-learn.org/1.1/glossary.html#term-predict_proba).
**n\_jobs**int, default=None
The number of jobs to use for the computation: the `n_classes` one-vs-rest problems are computed in parallel.
`None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Changed in version 0.20: `n_jobs` default changed from 1 to None
**verbose**int, default=0
The verbosity level, if non zero, progress messages are printed. Below 50, the output is sent to stderr. Otherwise, the output is sent to stdout. The frequency of the messages increases with the verbosity level, reporting all iterations at 10. See [`joblib.Parallel`](https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html#joblib.Parallel "(in joblib v1.3.0.dev0)") for more details.
New in version 1.1.
Attributes:
**estimators\_**list of `n_classes` estimators
Estimators used for predictions.
**classes\_**array, shape = [`n_classes`]
Class labels.
[`n_classes_`](#sklearn.multiclass.OneVsRestClassifier.n_classes_ "sklearn.multiclass.OneVsRestClassifier.n_classes_")int
Number of classes.
**label\_binarizer\_**LabelBinarizer object
Object used to transform multiclass labels to binary labels and vice-versa.
[`multilabel_`](#sklearn.multiclass.OneVsRestClassifier.multilabel_ "sklearn.multiclass.OneVsRestClassifier.multilabel_")boolean
Whether this is a multilabel classifier.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying estimator exposes such an attribute when fit.
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying estimator exposes such an attribute when fit.
New in version 1.0.
See also
`MultiOutputClassifier`
Alternate way of extending an estimator for multilabel classification.
[`sklearn.preprocessing.MultiLabelBinarizer`](sklearn.preprocessing.multilabelbinarizer#sklearn.preprocessing.MultiLabelBinarizer "sklearn.preprocessing.MultiLabelBinarizer")
Transform iterable of iterables to binary indicator matrix.
#### Examples
```
>>> import numpy as np
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> X = np.array([
... [10, 10],
... [8, 10],
... [-5, 5.5],
... [-5.4, 5.5],
... [-20, -20],
... [-15, -20]
... ])
>>> y = np.array([0, 0, 1, 1, 2, 2])
>>> clf = OneVsRestClassifier(SVC()).fit(X, y)
>>> clf.predict([[-19, -20], [9, 9], [-5, 5]])
array([2, 0, 1])
```
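For the multilabel use described above, a hedged sketch with a hypothetical binary indicator target (a 1 in column j of row i marks label j as present in sample i):
```
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]])
Y = np.array([[1, 0], [1, 1], [0, 1], [0, 1]])   # indicator matrix (n_samples, n_labels)

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
clf.predict(X)        # indicator matrix of shape (n_samples, n_labels)
clf.predict_proba(X)  # marginal probability per label; rows need not sum to 1
```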
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.multiclass.OneVsRestClassifier.decision_function "sklearn.multiclass.OneVsRestClassifier.decision_function")(X) | Decision function for the OneVsRestClassifier. |
| [`fit`](#sklearn.multiclass.OneVsRestClassifier.fit "sklearn.multiclass.OneVsRestClassifier.fit")(X, y) | Fit underlying estimators. |
| [`get_params`](#sklearn.multiclass.OneVsRestClassifier.get_params "sklearn.multiclass.OneVsRestClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.multiclass.OneVsRestClassifier.partial_fit "sklearn.multiclass.OneVsRestClassifier.partial_fit")(X, y[, classes]) | Partially fit underlying estimators. |
| [`predict`](#sklearn.multiclass.OneVsRestClassifier.predict "sklearn.multiclass.OneVsRestClassifier.predict")(X) | Predict multi-class targets using underlying estimators. |
| [`predict_proba`](#sklearn.multiclass.OneVsRestClassifier.predict_proba "sklearn.multiclass.OneVsRestClassifier.predict_proba")(X) | Probability estimates. |
| [`score`](#sklearn.multiclass.OneVsRestClassifier.score "sklearn.multiclass.OneVsRestClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.multiclass.OneVsRestClassifier.set_params "sklearn.multiclass.OneVsRestClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multiclass.py#L490)
Decision function for the OneVsRestClassifier.
Return the distance of each sample from the decision boundary for each class. This can only be used with estimators which implement the `decision_function` method.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
Returns:
**T**array-like of shape (n\_samples, n\_classes) or (n\_samples,) for binary classification.
Result of calling `decision_function` on the final estimator.
Changed in version 0.19: output shape changed to `(n_samples,)` to conform to scikit-learn conventions for binary classification.
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multiclass.py#L298)
Fit underlying estimators.
Parameters:
**X**(sparse) array-like of shape (n\_samples, n\_features)
Data.
**y**(sparse) array-like of shape (n\_samples,) or (n\_samples, n\_classes)
Multi-class targets. An indicator matrix turns on multilabel classification.
Returns:
**self**object
Instance of fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*multilabel\_
Whether this is a multilabel classifier.
*property*n\_classes\_
Number of classes.
partial\_fit(*X*, *y*, *classes=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multiclass.py#L347)
Partially fit underlying estimators.
Should be used when it is not feasible to train on all of the data at once, for example due to memory constraints. Chunks of data can be passed over several iterations.
Parameters:
**X**(sparse) array-like of shape (n\_samples, n\_features)
Data.
**y**(sparse) array-like of shape (n\_samples,) or (n\_samples, n\_classes)
Multi-class targets. An indicator matrix turns on multilabel classification.
**classes**array, shape (n\_classes, )
Classes across all calls to partial\_fit. Can be obtained via `np.unique(y_all)`, where y\_all is the target vector of the entire dataset. This argument is only required in the first call of partial\_fit and can be omitted in the subsequent calls.
Returns:
**self**object
Instance of partially fitted estimator.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multiclass.py#L412)
Predict multi-class targets using underlying estimators.
Parameters:
**X**(sparse) array-like of shape (n\_samples, n\_features)
Data.
Returns:
**y**(sparse) array-like of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted multi-class targets.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multiclass.py#L450)
Probability estimates.
The returned estimates for all classes are ordered by label of classes.
Note that in the multilabel case, each sample can have any number of labels. This returns the marginal probability that the given sample has the label in question. For example, it is entirely consistent that two labels both have a 90% probability of applying to a given sample.
In the single label multiclass case, the rows of the returned matrix sum to 1.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
Returns:
**T**(sparse) array-like of shape (n\_samples, n\_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.multiclass.OneVsRestClassifier`
-------------------------------------------------------
[Multilabel classification](../../auto_examples/miscellaneous/plot_multilabel#sphx-glr-auto-examples-miscellaneous-plot-multilabel-py)
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
[Receiver Operating Characteristic (ROC)](../../auto_examples/model_selection/plot_roc#sphx-glr-auto-examples-model-selection-plot-roc-py)
[Classifier Chain](../../auto_examples/multioutput/plot_classifier_chain_yeast#sphx-glr-auto-examples-multioutput-plot-classifier-chain-yeast-py)
scikit_learn sklearn.metrics.label_ranking_average_precision_score sklearn.metrics.label\_ranking\_average\_precision\_score
=========================================================
sklearn.metrics.label\_ranking\_average\_precision\_score(*y\_true*, *y\_score*, *\**, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L1029)
Compute ranking-based average precision.
Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the ratio of true vs. total labels with lower score.
This metric is used in multilabel ranking problems, where the goal is to give better rank to the labels associated to each sample.
The obtained score is always strictly greater than 0 and the best value is 1.
Read more in the [User Guide](../model_evaluation#label-ranking-average-precision).
Parameters:
**y\_true**{ndarray, sparse matrix} of shape (n\_samples, n\_labels)
True binary labels in binary indicator format.
**y\_score**ndarray of shape (n\_samples, n\_labels)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision\_function” on some classifiers).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
New in version 0.20.
Returns:
**score**float
Ranking-based average precision score.
#### Examples
```
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_average_precision_score
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_average_precision_score(y_true, y_score)
0.416...
```
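This value can be traced by hand: the first sample’s single true label (score 0.75) is out-ranked by exactly one other score, so it contributes 1/2; the second sample’s true label has the lowest score and ranks third out of three, contributing 1/3; averaging gives (1/2 + 1/3) / 2 ≈ 0.4167, matching the output above.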
scikit_learn sklearn.utils.metaestimators.if_delegate_has_method sklearn.utils.metaestimators.if\_delegate\_has\_method
======================================================
sklearn.utils.metaestimators.if\_delegate\_has\_method(*delegate*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/metaestimators.py#L224)
Create a decorator for methods that are delegated to a sub-estimator.
This enables ducktyping by hasattr returning True according to the sub-estimator.
Deprecated since version 1.1: `if_delegate_has_method` is deprecated in version 1.1 and will be removed in version 1.3. Use `available_if` instead.
Parameters:
**delegate**str, list of str or tuple of str
Name of the sub-estimator that can be accessed as an attribute of the base object. If a list or a tuple of names are provided, the first sub-estimator that is an attribute of the base object will be used.
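Since the decorator is deprecated, here is a minimal sketch of the suggested replacement pattern with `available_if` (the wrapper class and helper names below are hypothetical):
```
from sklearn.base import BaseEstimator
from sklearn.utils.metaestimators import available_if


def _estimator_has(attr):
    """Return a callable that checks whether the wrapped sub-estimator has `attr`."""
    def check(self):
        return hasattr(self.estimator, attr)
    return check


class DelegatingWrapper(BaseEstimator):
    def __init__(self, estimator):
        self.estimator = estimator

    @available_if(_estimator_has("predict"))
    def predict(self, X):
        # Exposed (and hasattr returns True) only when the sub-estimator
        # itself implements `predict`.
        return self.estimator.predict(X)
```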
scikit_learn sklearn.metrics.silhouette_samples sklearn.metrics.silhouette\_samples
===================================
sklearn.metrics.silhouette\_samples(*X*, *labels*, *\**, *metric='euclidean'*, *\*\*kwds*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_unsupervised.py#L152)
Compute the Silhouette Coefficient for each sample.
The Silhouette Coefficient is a measure of how well samples are clustered with samples that are similar to themselves. Clustering models with a high Silhouette Coefficient are said to be dense, where samples in the same cluster are similar to each other, and well separated, where samples in different clusters are not very similar to each other.
The Silhouette Coefficient is calculated using the mean intra-cluster distance (`a`) and the mean nearest-cluster distance (`b`) for each sample. The Silhouette Coefficient for a sample is `(b - a) / max(a, b)`. Note that the Silhouette Coefficient is only defined if the number of labels satisfies `2 <= n_labels <= n_samples - 1`.
This function returns the Silhouette Coefficient for each sample.
The best value is 1 and the worst value is -1. Values near 0 indicate overlapping clusters.
Read more in the [User Guide](../clustering#silhouette-coefficient).
Parameters:
**X**array-like of shape (n\_samples\_a, n\_samples\_a) if metric == “precomputed” or (n\_samples\_a, n\_features) otherwise
An array of pairwise distances between samples, or a feature array.
**labels**array-like of shape (n\_samples,)
Label values for each sample.
**metric**str or callable, default=’euclidean’
The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by `sklearn.metrics.pairwise.pairwise_distances`. If `X` is the distance array itself, use “precomputed” as the metric. Precomputed distance matrices must have 0 along the diagonal.
**\*\*kwds**optional keyword parameters
Any further parameters are passed directly to the distance function. If using a `scipy.spatial.distance` metric, the parameters are still metric dependent. See the scipy docs for usage examples.
Returns:
**silhouette**array-like of shape (n\_samples,)
Silhouette Coefficients for each sample.
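A minimal usage sketch with toy data and KMeans labels (both hypothetical):
```
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances, silhouette_samples

X = np.array([[1.0, 1.0], [1.0, 2.0], [8.0, 8.0], [8.0, 9.0]])
labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)
per_sample = silhouette_samples(X, labels)   # one coefficient in [-1, 1] per sample

# With a precomputed distance matrix (zeros on the diagonal) instead of features:
per_sample_pre = silhouette_samples(pairwise_distances(X), labels, metric="precomputed")
```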
#### References
[1] [Peter J. Rousseeuw (1987). “Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”. Computational and Applied Mathematics 20: 53-65.](https://www.sciencedirect.com/science/article/pii/0377042787901257)
[2] [Wikipedia entry on the Silhouette Coefficient](https://en.wikipedia.org/wiki/Silhouette_(clustering))
Examples using `sklearn.metrics.silhouette_samples`
---------------------------------------------------
[Selecting the number of clusters with silhouette analysis on KMeans clustering](../../auto_examples/cluster/plot_kmeans_silhouette_analysis#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py)
scikit_learn sklearn.utils.check_array sklearn.utils.check\_array
==========================
sklearn.utils.check\_array(*array*, *accept\_sparse=False*, *\**, *accept\_large\_sparse=True*, *dtype='numeric'*, *order=None*, *copy=False*, *force\_all\_finite=True*, *ensure\_2d=True*, *allow\_nd=False*, *ensure\_min\_samples=1*, *ensure\_min\_features=1*, *estimator=None*, *input\_name=''*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L619)
Input validation on an array, list, sparse matrix or similar.
By default, the input is checked to be a non-empty 2D array containing only finite values. If the dtype of the array is object, attempt converting to float, raising on failure.
Parameters:
**array**object
Input object to check / convert.
**accept\_sparse**str, bool or list/tuple of str, default=False
String[s] representing allowed sparse matrix formats, such as ‘csc’, ‘csr’, etc. If the input is sparse but not in the allowed format, it will be converted to the first listed format. True allows the input to be any format. False means that a sparse matrix input will raise an error.
**accept\_large\_sparse**bool, default=True
If a CSR, CSC, COO or BSR sparse matrix is supplied and accepted by accept\_sparse, accept\_large\_sparse=False will cause it to be accepted only if its indices are stored with a 32-bit dtype.
New in version 0.20.
**dtype**‘numeric’, type, list of type or None, default=’numeric’
Data type of result. If None, the dtype of the input is preserved. If “numeric”, dtype is preserved unless array.dtype is object. If dtype is a list of types, conversion on the first type is only performed if the dtype of the input is not in the list.
**order**{‘F’, ‘C’} or None, default=None
Whether an array will be forced to be fortran or c-style. When order is None (default), then if copy=False, nothing is ensured about the memory layout of the output array; otherwise (copy=True) the memory layout of the returned array is kept as close as possible to the original array.
**copy**bool, default=False
Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion.
**force\_all\_finite**bool or ‘allow-nan’, default=True
Whether to raise an error on np.inf, np.nan, pd.NA in array. The possibilities are:
* True: Force all values of array to be finite.
* False: accepts np.inf, np.nan, pd.NA in array.
* ‘allow-nan’: accepts only np.nan and pd.NA values in array. Values cannot be infinite.
New in version 0.20: `force_all_finite` accepts the string `'allow-nan'`.
Changed in version 0.23: Accepts `pd.NA` and converts it into `np.nan`
**ensure\_2d**bool, default=True
Whether to raise a value error if array is not 2D.
**allow\_nd**bool, default=False
Whether to allow array.ndim > 2.
**ensure\_min\_samples**int, default=1
Make sure that the array has a minimum number of samples in its first axis (rows for a 2D array). Setting to 0 disables this check.
**ensure\_min\_features**int, default=1
Make sure that the 2D array has some minimum number of features (columns). The default value of 1 rejects empty datasets. This check is only enforced when the input data has effectively 2 dimensions or is originally 1D and `ensure_2d` is True. Setting to 0 disables this check.
**estimator**str or estimator instance, default=None
If passed, include the name of the estimator in warning messages.
**input\_name**str, default=””
The data name used to construct the error message. In particular if `input_name` is “X” and the data has NaN values and allow\_nan is False, the error message will link to the imputer documentation.
New in version 1.1.0.
Returns:
**array\_converted**object
The converted and validated array.
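A minimal sketch of typical calls (the inputs here are purely illustrative):
```
import numpy as np
from scipy import sparse
from sklearn.utils import check_array

# A nested list is validated and returned as a 2D ndarray.
X = check_array([[1, 2], [3, 4]])

# Sparse input is only accepted when allowed; a COO matrix is converted
# to the first listed format (CSR here).
X_sparse = check_array(sparse.coo_matrix(np.eye(3)), accept_sparse=["csr"])

# NaN values are rejected by default; 'allow-nan' permits them (but not inf).
X_nan = check_array(np.array([[1.0, np.nan], [2.0, 3.0]]), force_all_finite="allow-nan")
```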
scikit_learn sklearn.utils.multiclass.unique_labels sklearn.utils.multiclass.unique\_labels
=======================================
sklearn.utils.multiclass.unique\_labels(*\*ys*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/multiclass.py#L42)
Extract an ordered array of unique labels.
We don’t allow:
* mix of multilabel and multiclass (single label) targets
* mix of label indicator matrix and anything else, because there are no explicit labels
* mix of label indicator matrices of different sizes
* mix of string and integer labels
At the moment, we also don’t allow “multiclass-multioutput” input type.
Parameters:
**\*ys**array-likes
Returns:
**out**ndarray of shape (n\_unique\_labels,)
An ordered array of unique labels.
#### Examples
```
>>> from sklearn.utils.multiclass import unique_labels
>>> unique_labels([3, 5, 5, 5, 7, 7])
array([3, 5, 7])
>>> unique_labels([1, 2, 3, 4], [2, 2, 3, 4])
array([1, 2, 3, 4])
>>> unique_labels([1, 2, 10], [5, 11])
array([ 1, 2, 5, 10, 11])
```
scikit_learn sklearn.semi_supervised.LabelPropagation sklearn.semi\_supervised.LabelPropagation
=========================================
*class*sklearn.semi\_supervised.LabelPropagation(*kernel='rbf'*, *\**, *gamma=20*, *n\_neighbors=7*, *max\_iter=1000*, *tol=0.001*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/semi_supervised/_label_propagation.py#L332)
Label Propagation classifier.
Read more in the [User Guide](../semi_supervised#label-propagation).
Parameters:
**kernel**{‘knn’, ‘rbf’} or callable, default=’rbf’
String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape (n\_samples, n\_features), and return a (n\_samples, n\_samples) shaped weight matrix.
**gamma**float, default=20
Parameter for rbf kernel.
**n\_neighbors**int, default=7
Parameter for knn kernel which needs to be strictly positive.
**max\_iter**int, default=1000
Change maximum number of iterations allowed.
**tol**float, default=1e-3
Convergence tolerance: threshold to consider the system at steady state.
**n\_jobs**int, default=None
The number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Attributes:
**X\_**ndarray of shape (n\_samples, n\_features)
Input array.
**classes\_**ndarray of shape (n\_classes,)
The distinct labels used in classifying instances.
**label\_distributions\_**ndarray of shape (n\_samples, n\_classes)
Categorical distribution for each item.
**transduction\_**ndarray of shape (n\_samples)
Label assigned to each item via the transduction.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_iter\_**int
Number of iterations run.
See also
`BaseLabelPropagation`
Base class for label propagation module.
[`LabelSpreading`](sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading "sklearn.semi_supervised.LabelSpreading")
Alternate label propagation strategy more robust to noise.
#### References
Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002 <http://pages.cs.wisc.edu/~jerryzhu/pub/CMU-CALD-02-107.pdf>
#### Examples
```
>>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelPropagation
>>> label_prop_model = LabelPropagation()
>>> iris = datasets.load_iris()
>>> rng = np.random.RandomState(42)
>>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
LabelPropagation(...)
```
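Continuing the example above, the fitted model can then be queried both inductively and transductively (a sketch only; outputs are omitted):
```
pred = label_prop_model.predict(iris.data[:5])          # inductive predictions for new points
proba = label_prop_model.predict_proba(iris.data[:5])   # per-class probability estimates
transduced = label_prop_model.transduction_             # labels inferred for all training points
```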
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.semi_supervised.LabelPropagation.fit "sklearn.semi_supervised.LabelPropagation.fit")(X, y) | Fit a semi-supervised label propagation model to X. |
| [`get_params`](#sklearn.semi_supervised.LabelPropagation.get_params "sklearn.semi_supervised.LabelPropagation.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.semi_supervised.LabelPropagation.predict "sklearn.semi_supervised.LabelPropagation.predict")(X) | Perform inductive inference across the model. |
| [`predict_proba`](#sklearn.semi_supervised.LabelPropagation.predict_proba "sklearn.semi_supervised.LabelPropagation.predict_proba")(X) | Predict probability for each possible outcome. |
| [`score`](#sklearn.semi_supervised.LabelPropagation.score "sklearn.semi_supervised.LabelPropagation.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.semi_supervised.LabelPropagation.set_params "sklearn.semi_supervised.LabelPropagation.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/semi_supervised/_label_propagation.py#L456)
Fit a semi-supervised label propagation model to X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target class values with unlabeled points marked as -1. All unlabeled samples will be transductively assigned labels internally.
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/semi_supervised/_label_propagation.py#L169)
Perform inductive inference across the model.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**y**ndarray of shape (n\_samples,)
Predictions for input data.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/semi_supervised/_label_propagation.py#L185)
Predict probability for each possible outcome.
Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**probabilities**ndarray of shape (n\_samples, n\_classes)
Normalized probability distributions across class labels.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.model_selection.GroupKFold sklearn.model\_selection.GroupKFold
===================================
*class*sklearn.model\_selection.GroupKFold(*n\_splits=5*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L453)
K-fold iterator variant with non-overlapping groups.
Each group will appear exactly once in the test set across all folds (the number of distinct groups has to be at least equal to the number of folds).
The folds are approximately balanced in the sense that the number of distinct groups is approximately the same in each fold.
Read more in the [User Guide](../cross_validation#group-k-fold).
Parameters:
**n\_splits**int, default=5
Number of folds. Must be at least 2.
Changed in version 0.22: `n_splits` default value changed from 3 to 5.
See also
[`LeaveOneGroupOut`](sklearn.model_selection.leaveonegroupout#sklearn.model_selection.LeaveOneGroupOut "sklearn.model_selection.LeaveOneGroupOut")
For splitting the data according to explicit domain-specific stratification of the dataset.
[`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold")
Takes class information into account to avoid building folds with imbalanced class proportions (for binary or multiclass classification tasks).
#### Notes
Groups appear in an arbitrary order throughout the folds.
#### Examples
```
>>> import numpy as np
>>> from sklearn.model_selection import GroupKFold
>>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> y = np.array([1, 2, 3, 4])
>>> groups = np.array([0, 0, 2, 2])
>>> group_kfold = GroupKFold(n_splits=2)
>>> group_kfold.get_n_splits(X, y, groups)
2
>>> print(group_kfold)
GroupKFold(n_splits=2)
>>> for train_index, test_index in group_kfold.split(X, y, groups):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
... print(X_train, X_test, y_train, y_test)
...
TRAIN: [0 1] TEST: [2 3]
[[1 2]
[3 4]] [[5 6]
[7 8]] [1 2] [3 4]
TRAIN: [2 3] TEST: [0 1]
[[5 6]
[7 8]] [[1 2]
[3 4]] [3 4] [1 2]
```
#### Methods
| | |
| --- | --- |
| [`get_n_splits`](#sklearn.model_selection.GroupKFold.get_n_splits "sklearn.model_selection.GroupKFold.get_n_splits")([X, y, groups]) | Returns the number of splitting iterations in the cross-validator |
| [`split`](#sklearn.model_selection.GroupKFold.split "sklearn.model_selection.GroupKFold.split")(X[, y, groups]) | Generate indices to split data into training and test set. |
get\_n\_splits(*X=None*, *y=None*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L343)
Returns the number of splitting iterations in the cross-validator.
Parameters:
**X**object
Always ignored, exists for compatibility.
**y**object
Always ignored, exists for compatibility.
**groups**object
Always ignored, exists for compatibility.
Returns:
**n\_splits**int
Returns the number of splitting iterations in the cross-validator.
split(*X*, *y=None*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L554)
Generate indices to split data into training and test set.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,), default=None
The target variable for supervised learning problems.
**groups**array-like of shape (n\_samples,)
Group labels for the samples used while splitting the dataset into train/test set.
Yields:
**train**ndarray
The training set indices for that split.
**test**ndarray
The testing set indices for that split.
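In practice, the splitter is usually passed to a cross-validation helper such as `cross_val_score` rather than iterated manually. A minimal sketch, using made-up toy data for illustration:

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

# Passing the splitter and the group labels guarantees that samples from the
# same group never end up in both the training and the test fold.
scores = cross_val_score(LogisticRegression(), X, y, groups=groups,
                         cv=GroupKFold(n_splits=5))
print(scores.shape)  # (5,) -- one accuracy score per fold
```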
Examples using `sklearn.model_selection.GroupKFold`
---------------------------------------------------
[Visualizing cross-validation behavior in scikit-learn](../../auto_examples/model_selection/plot_cv_indices#sphx-glr-auto-examples-model-selection-plot-cv-indices-py)
sklearn.utils.multiclass.type\_of\_target
=========================================
sklearn.utils.multiclass.type\_of\_target(*y*, *input\_name=''*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/multiclass.py#L203)
Determine the type of data indicated by the target.
Note that this type is the most specific type that can be inferred. For example:
* `binary` is more specific but compatible with `multiclass`.
* `multiclass` of integers is more specific but compatible with `continuous`.
* `multilabel-indicator` is more specific but compatible with `multiclass-multioutput`.
Parameters:
**y**array-like
**input\_name**str, default=""
The data name used to construct the error message.
New in version 1.1.0.
Returns:
**target\_type**str
One of:
* ‘continuous’: `y` is an array-like of floats that are not all integers, and is 1d or a column vector.
* ‘continuous-multioutput’: `y` is a 2d array of floats that are not all integers, and both dimensions are of size > 1.
* ‘binary’: `y` contains <= 2 discrete values and is 1d or a column vector.
* ‘multiclass’: `y` contains more than two discrete values, is not a sequence of sequences, and is 1d or a column vector.
* ‘multiclass-multioutput’: `y` is a 2d array that contains more than two discrete values, is not a sequence of sequences, and both dimensions are of size > 1.
* ‘multilabel-indicator’: `y` is a label indicator matrix, an array of two dimensions with at least two columns, and at most 2 unique values.
* ‘unknown’: `y` is array-like but none of the above, such as a 3d array, sequence of sequences, or an array of non-sequence objects.
#### Examples
```
>>> from sklearn.utils.multiclass import type_of_target
>>> import numpy as np
>>> type_of_target([0.1, 0.6])
'continuous'
>>> type_of_target([1, -1, -1, 1])
'binary'
>>> type_of_target(['a', 'b', 'a'])
'binary'
>>> type_of_target([1.0, 2.0])
'binary'
>>> type_of_target([1, 0, 2])
'multiclass'
>>> type_of_target([1.0, 0.0, 3.0])
'multiclass'
>>> type_of_target(['a', 'b', 'c'])
'multiclass'
>>> type_of_target(np.array([[1, 2], [3, 1]]))
'multiclass-multioutput'
>>> type_of_target([[1, 2]])
'multilabel-indicator'
>>> type_of_target(np.array([[1.5, 2.0], [3.0, 1.6]]))
'continuous-multioutput'
>>> type_of_target(np.array([[0, 1], [1, 1]]))
'multilabel-indicator'
```
sklearn.cluster.kmeans\_plusplus
================================
sklearn.cluster.kmeans\_plusplus(*X*, *n\_clusters*, *\**, *x\_squared\_norms=None*, *random\_state=None*, *n\_local\_trials=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L58)
Init n\_clusters seeds according to k-means++.
New in version 0.24.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data to pick seeds from.
**n\_clusters**int
The number of centroids to initialize.
**x\_squared\_norms**array-like of shape (n\_samples,), default=None
Squared Euclidean norm of each data point.
**random\_state**int or RandomState instance, default=None
Determines random number generation for centroid initialization. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**n\_local\_trials**int, default=None
The number of seeding trials for each center (except the first), of which the one reducing inertia the most is greedily chosen. Set to None to make the number of trials depend logarithmically on the number of seeds (2+log(k)).
Returns:
**centers**ndarray of shape (n\_clusters, n\_features)
The initial centers for k-means.
**indices**ndarray of shape (n\_clusters,)
The index location of the chosen centers in the data array X. For a given index and center, X[index] = center.
#### Notes
Selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See: Arthur, D. and Vassilvitskii, S. “k-means++: the advantages of careful seeding”. ACM-SIAM symposium on Discrete algorithms. 2007
#### Examples
```
>>> from sklearn.cluster import kmeans_plusplus
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
... [10, 2], [10, 4], [10, 0]])
>>> centers, indices = kmeans_plusplus(X, n_clusters=2, random_state=0)
>>> centers
array([[10, 4],
[ 1, 0]])
>>> indices
array([4, 2])
```
Examples using `sklearn.cluster.kmeans_plusplus`
------------------------------------------------
[An example of K-Means++ initialization](../../auto_examples/cluster/plot_kmeans_plusplus#sphx-glr-auto-examples-cluster-plot-kmeans-plusplus-py)
sklearn.utils.safe\_sqr
=======================
sklearn.utils.safe\_sqr(*X*, *\**, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/__init__.py#L656)
Element-wise squaring of array-likes and sparse matrices.
Parameters:
**X**{array-like, ndarray, sparse matrix}
**copy**bool, default=True
Whether to create a copy of X and operate on it or to perform inplace computation (default behaviour).
Returns:
**X \*\* 2**Element-wise square of `X`.
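A minimal usage sketch (toy inputs chosen for illustration); the same call handles dense arrays and sparse matrices:

```
import numpy as np
from scipy import sparse
from sklearn.utils import safe_sqr

dense = np.array([1.0, -2.0, 3.0])
print(safe_sqr(dense))              # [1. 4. 9.]

sp = sparse.csr_matrix([[0.0, 2.0], [3.0, 0.0]])
print(safe_sqr(sp).toarray())       # [[0. 4.], [9. 0.]]
```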
sklearn.cluster.DBSCAN
======================
*class*sklearn.cluster.DBSCAN(*eps=0.5*, *\**, *min\_samples=5*, *metric='euclidean'*, *metric\_params=None*, *algorithm='auto'*, *leaf\_size=30*, *p=None*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_dbscan.py#L165)
Perform DBSCAN clustering from vector array or distance matrix.
DBSCAN - Density-Based Spatial Clustering of Applications with Noise. Finds core samples of high density and expands clusters from them. Good for data which contains clusters of similar density.
Read more in the [User Guide](../clustering#dbscan).
Parameters:
**eps**float, default=0.5
The maximum distance between two samples for one to be considered as in the neighborhood of the other. This is not a maximum bound on the distances of points within a cluster. This is the most important DBSCAN parameter to choose appropriately for your data set and distance function.
**min\_samples**int, default=5
The number of samples (or total weight) in a neighborhood for a point to be considered as a core point. This includes the point itself.
**metric**str, or callable, default=’euclidean’
The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by [`sklearn.metrics.pairwise_distances`](sklearn.metrics.pairwise_distances#sklearn.metrics.pairwise_distances "sklearn.metrics.pairwise_distances") for its metric parameter. If metric is “precomputed”, X is assumed to be a distance matrix and must be square. X may be a [sparse graph](https://scikit-learn.org/1.1/glossary.html#term-sparse-graph), in which case only “nonzero” elements may be considered neighbors for DBSCAN.
New in version 0.17: metric *precomputed* to accept precomputed sparse matrix.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
New in version 0.19.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
The algorithm to be used by the NearestNeighbors module to compute pointwise distances and find nearest neighbors. See NearestNeighbors module documentation for details.
**leaf\_size**int, default=30
Leaf size passed to BallTree or cKDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**p**float, default=None
The power of the Minkowski metric to be used to calculate distance between points. If None, then `p=2` (equivalent to the Euclidean distance).
**n\_jobs**int, default=None
The number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Attributes:
**core\_sample\_indices\_**ndarray of shape (n\_core\_samples,)
Indices of core samples.
**components\_**ndarray of shape (n\_core\_samples, n\_features)
Copy of each core sample found by training.
**labels\_**ndarray of shape (n\_samples)
Cluster labels for each point in the dataset given to fit(). Noisy samples are given the label -1.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`OPTICS`](sklearn.cluster.optics#sklearn.cluster.OPTICS "sklearn.cluster.OPTICS")
A similar clustering at multiple values of eps. Our implementation is optimized for memory usage.
#### Notes
For an example, see [examples/cluster/plot\_dbscan.py](../../auto_examples/cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py).
This implementation bulk-computes all neighborhood queries, which increases the memory complexity to O(n·d), where d is the average number of neighbors, while the original DBSCAN had memory complexity O(n). It may incur a higher memory cost when querying these nearest neighborhoods, depending on the `algorithm`.
One way to avoid the query complexity is to pre-compute sparse neighborhoods in chunks using [`NearestNeighbors.radius_neighbors_graph`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.radius_neighbors_graph "sklearn.neighbors.NearestNeighbors.radius_neighbors_graph") with `mode='distance'`, then using `metric='precomputed'` here.
Another way to reduce memory and computation time is to remove (near-)duplicate points and use `sample_weight` instead.
`cluster.OPTICS` provides a similar clustering with lower memory usage.
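Following the precomputed-neighborhoods note above, a minimal sketch of that workflow, reusing the toy data from the Examples section below (the printed labels are illustrative):

```
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

X = np.array([[1, 2], [2, 2], [2, 3],
              [8, 7], [8, 8], [25, 80]], dtype=float)
eps = 3.0

# Precompute the sparse radius-neighborhood graph ...
graph = NearestNeighbors(radius=eps).fit(X).radius_neighbors_graph(X, mode="distance")

# ... then cluster on the precomputed distances.
labels = DBSCAN(eps=eps, min_samples=2, metric="precomputed").fit_predict(graph)
print(labels)  # expected to match the direct fit, e.g. [ 0  0  0  1  1 -1]
```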
#### References
Ester, M., H. P. Kriegel, J. Sander, and X. Xu, “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise”. In: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI Press, pp. 226-231. 1996
Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu, X. (2017). DBSCAN revisited, revisited: why and how you should (still) use DBSCAN. ACM Transactions on Database Systems (TODS), 42(3), 19.
#### Examples
```
>>> from sklearn.cluster import DBSCAN
>>> import numpy as np
>>> X = np.array([[1, 2], [2, 2], [2, 3],
... [8, 7], [8, 8], [25, 80]])
>>> clustering = DBSCAN(eps=3, min_samples=2).fit(X)
>>> clustering.labels_
array([ 0, 0, 0, 1, 1, -1])
>>> clustering
DBSCAN(eps=3, min_samples=2)
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cluster.DBSCAN.fit "sklearn.cluster.DBSCAN.fit")(X[, y, sample\_weight]) | Perform DBSCAN clustering from features, or distance matrix. |
| [`fit_predict`](#sklearn.cluster.DBSCAN.fit_predict "sklearn.cluster.DBSCAN.fit_predict")(X[, y, sample\_weight]) | Compute clusters from a data or distance matrix and predict labels. |
| [`get_params`](#sklearn.cluster.DBSCAN.get_params "sklearn.cluster.DBSCAN.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.cluster.DBSCAN.set_params "sklearn.cluster.DBSCAN.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_dbscan.py#L322)
Perform DBSCAN clustering from features, or distance matrix.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features), or (n\_samples, n\_samples)
Training instances to cluster, or distances between instances if `metric='precomputed'`. If a sparse matrix is provided, it will be converted into a sparse `csr_matrix`.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weight of each sample, such that a sample with a weight of at least `min_samples` is by itself a core sample; a sample with a negative weight may inhibit its eps-neighbor from being core. Note that weights are absolute, and default to 1.
Returns:
**self**object
Returns a fitted instance of self.
fit\_predict(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_dbscan.py#L433)
Compute clusters from a data or distance matrix and predict labels.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features), or (n\_samples, n\_samples)
Training instances to cluster, or distances between instances if `metric='precomputed'`. If a sparse matrix is provided, it will be converted into a sparse `csr_matrix`.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weight of each sample, such that a sample with a weight of at least `min_samples` is by itself a core sample; a sample with a negative weight may inhibit its eps-neighbor from being core. Note that weights are absolute, and default to 1.
Returns:
**labels**ndarray of shape (n\_samples,)
Cluster labels. Noisy samples are given the label -1.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.cluster.DBSCAN`
---------------------------------------
[Comparing different clustering algorithms on toy datasets](../../auto_examples/cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
[Demo of DBSCAN clustering algorithm](../../auto_examples/cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py)
sklearn.feature\_selection.mutual\_info\_regression
===================================================
sklearn.feature\_selection.mutual\_info\_regression(*X*, *y*, *\**, *discrete\_features='auto'*, *n\_neighbors=3*, *copy=True*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_mutual_info.py#L313)
Estimate mutual information for a continuous target variable.
Mutual information (MI) [[1]](#r37d39d7589e2-1) between two random variables is a non-negative value, which measures the dependency between the variables. It is equal to zero if and only if two random variables are independent, and higher values mean higher dependency.
The function relies on nonparametric methods based on entropy estimation from k-nearest neighbors distances as described in [[2]](#r37d39d7589e2-2) and [[3]](#r37d39d7589e2-3). Both methods are based on the idea originally proposed in [[4]](#r37d39d7589e2-4).
It can be used for univariate features selection, read more in the [User Guide](../feature_selection#univariate-feature-selection).
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Feature matrix.
**y**array-like of shape (n\_samples,)
Target vector.
**discrete\_features**{‘auto’, bool, array-like}, default=’auto’
If bool, then determines whether to consider all features discrete or continuous. If array, then it should be either a boolean mask with shape (n\_features,) or array with indices of discrete features. If ‘auto’, it is assigned to False for dense `X` and to True for sparse `X`.
**n\_neighbors**int, default=3
Number of neighbors to use for MI estimation for continuous variables, see [[2]](#r37d39d7589e2-2) and [[3]](#r37d39d7589e2-3). Higher values reduce variance of the estimation, but could introduce a bias.
**copy**bool, default=True
Whether to make a copy of the given data. If set to False, the initial data will be overwritten.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for adding small noise to continuous variables in order to remove repeated values. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**mi**ndarray, shape (n\_features,)
Estimated mutual information between each feature and the target.
#### Notes
1. The term “discrete features” is used instead of naming them “categorical”, because it describes the essence more accurately. For example, pixel intensities of an image are discrete features (but hardly categorical) and you will get better results if you mark them as such. Also note that treating a continuous variable as discrete and vice versa will usually give incorrect results, so be attentive about that.
2. True mutual information can’t be negative. If its estimate turns out to be negative, it is replaced by zero.
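A minimal usage sketch, with synthetic data in which only the first feature carries information about the target:

```
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(0)
X = rng.uniform(size=(1000, 3))
y = X[:, 0] + 0.1 * rng.normal(size=1000)  # depends on the first feature only

mi = mutual_info_regression(X, y, random_state=0)
print(mi)  # the first entry should clearly dominate the other two
```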
#### References
[[1](#id1)] [Mutual Information](https://en.wikipedia.org/wiki/Mutual_information) on Wikipedia.
[2] ([1](#id2),[2](#id5)) A. Kraskov, H. Stogbauer and P. Grassberger, “Estimating mutual information”. Phys. Rev. E 69, 2004.
[3] ([1](#id3),[2](#id6)) B. C. Ross “Mutual Information between Discrete and Continuous Data Sets”. PLoS ONE 9(2), 2014.
[[4](#id4)] L. F. Kozachenko, N. N. Leonenko, “Sample Estimate of the Entropy of a Random Vector”, Probl. Peredachi Inf., 23:2 (1987), 9-16
Examples using `sklearn.feature_selection.mutual_info_regression`
-----------------------------------------------------------------
[Comparison of F-test and mutual information](../../auto_examples/feature_selection/plot_f_test_vs_mi#sphx-glr-auto-examples-feature-selection-plot-f-test-vs-mi-py)
sklearn.metrics.mean\_absolute\_error
=====================================
sklearn.metrics.mean\_absolute\_error(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*, *multioutput='uniform\_average'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_regression.py#L141)
Mean absolute error regression loss.
Read more in the [User Guide](../model_evaluation#mean-absolute-error).
Parameters:
**y\_true**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Ground truth (correct) target values.
**y\_pred**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Estimated target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**multioutput**{‘raw\_values’, ‘uniform\_average’} or array-like of shape (n\_outputs,), default=’uniform\_average’
Defines aggregating of multiple output values. Array-like value defines weights used to average errors.
‘raw\_values’ :
Returns a full set of errors in case of multioutput input.
‘uniform\_average’ :
Errors of all outputs are averaged with uniform weight.
Returns:
**loss**float or ndarray of floats
If multioutput is ‘raw\_values’, then mean absolute error is returned for each output separately. If multioutput is ‘uniform\_average’ or an ndarray of weights, then the weighted average of all output errors is returned.
MAE output is non-negative floating point. The best value is 0.0.
#### Examples
```
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85...
```
Examples using `sklearn.metrics.mean_absolute_error`
----------------------------------------------------
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Quantile regression](../../auto_examples/linear_model/plot_quantile_regression#sphx-glr-auto-examples-linear-model-plot-quantile-regression-py)
[Tweedie regression on insurance claims](../../auto_examples/linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
sklearn.decomposition.NMF
=========================
*class*sklearn.decomposition.NMF(*n\_components=None*, *\**, *init=None*, *solver='cd'*, *beta\_loss='frobenius'*, *tol=0.0001*, *max\_iter=200*, *random\_state=None*, *alpha='deprecated'*, *alpha\_W=0.0*, *alpha\_H='same'*, *l1\_ratio=0.0*, *verbose=0*, *shuffle=False*, *regularization='deprecated'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_nmf.py#L1158)
Non-Negative Matrix Factorization (NMF).
Find two non-negative matrices (W, H), i.e. matrices with all non-negative elements, whose product approximates the non-negative matrix X. This factorization can be used for example for dimensionality reduction, source separation or topic extraction.
The objective function is:
\[
\begin{aligned}
L(W, H) &= 0.5 \, \|X - WH\|_{loss}^2 \\
&\quad + \text{alpha\_W} \cdot \text{l1\_ratio} \cdot \text{n\_features} \cdot \|\operatorname{vec}(W)\|_1 \\
&\quad + \text{alpha\_H} \cdot \text{l1\_ratio} \cdot \text{n\_samples} \cdot \|\operatorname{vec}(H)\|_1 \\
&\quad + 0.5 \cdot \text{alpha\_W} \cdot (1 - \text{l1\_ratio}) \cdot \text{n\_features} \cdot \|W\|_{Fro}^2 \\
&\quad + 0.5 \cdot \text{alpha\_H} \cdot (1 - \text{l1\_ratio}) \cdot \text{n\_samples} \cdot \|H\|_{Fro}^2
\end{aligned}
\]
Where:
\(\|A\|_{Fro}^2 = \sum_{i,j} A_{ij}^2\) (Frobenius norm)
\(\|\operatorname{vec}(A)\|_1 = \sum_{i,j} |A_{ij}|\) (element-wise L1 norm)
The generic norm \(||X - WH||\_{loss}\) may represent the Frobenius norm or another supported beta-divergence loss. The choice between options is controlled by the `beta_loss` parameter.
The regularization terms are scaled by `n_features` for `W` and by `n_samples` for `H` to keep their impact balanced with respect to one another and to the data fit term as independent as possible of the size `n_samples` of the training set.
The objective function is minimized with an alternating minimization of W and H.
Note that the transformed data is named W and the components matrix is named H. In the NMF literature, the naming convention is usually the opposite since the data matrix X is transposed.
Read more in the [User Guide](../decomposition#nmf).
Parameters:
**n\_components**int, default=None
Number of components. If `n_components` is not set, all features are kept.
**init**{‘random’, ‘nndsvd’, ‘nndsvda’, ‘nndsvdar’, ‘custom’}, default=None
Method used to initialize the procedure. Default: None. Valid options:
* `None`: ‘nndsvda’ if n\_components <= min(n\_samples, n\_features), otherwise random.
* `'random'`: non-negative random matrices, scaled with: sqrt(X.mean() / n\_components)
* `'nndsvd'`: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness)
* `'nndsvda'`: NNDSVD with zeros filled with the average of X (better when sparsity is not desired)
* `'nndsvdar'` NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired)
* `'custom'`: use custom matrices W and H
Changed in version 1.1: When `init=None` and `n_components` is less than `n_samples` and `n_features`, the default is `nndsvda` instead of `nndsvd`.
**solver**{‘cd’, ‘mu’}, default=’cd’
Numerical solver to use: ‘cd’ is a Coordinate Descent solver. ‘mu’ is a Multiplicative Update solver.
New in version 0.17: Coordinate Descent solver.
New in version 0.19: Multiplicative Update solver.
**beta\_loss**float or {‘frobenius’, ‘kullback-leibler’, ‘itakura-saito’}, default=’frobenius’
Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from ‘frobenius’ (or 2) and ‘kullback-leibler’ (or 1) lead to significantly slower fits. Note that for beta\_loss <= 0 (or ‘itakura-saito’), the input matrix X cannot contain zeros. Used only in ‘mu’ solver.
New in version 0.19.
**tol**float, default=1e-4
Tolerance of the stopping condition.
**max\_iter**int, default=200
Maximum number of iterations before timing out.
**random\_state**int, RandomState instance or None, default=None
Used for initialisation (when `init` == ‘nndsvdar’ or ‘random’), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**alpha**float, default=0.0
Constant that multiplies the regularization terms. Set it to zero to have no regularization. When using `alpha` instead of `alpha_W` and `alpha_H`, the regularization terms are not scaled by the `n_features` (resp. `n_samples`) factors for `W` (resp. `H`).
New in version 0.17: *alpha* used in the Coordinate Descent solver.
Deprecated since version 1.0: The `alpha` parameter is deprecated in 1.0 and will be removed in 1.2. Use `alpha_W` and `alpha_H` instead.
**alpha\_W**float, default=0.0
Constant that multiplies the regularization terms of `W`. Set it to zero (default) to have no regularization on `W`.
New in version 1.0.
**alpha\_H**float or “same”, default=”same”
Constant that multiplies the regularization terms of `H`. Set it to zero to have no regularization on `H`. If “same” (default), it takes the same value as `alpha_W`.
New in version 1.0.
**l1\_ratio**float, default=0.0
The regularization mixing parameter, with 0 <= l1\_ratio <= 1. For l1\_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius Norm). For l1\_ratio = 1 it is an elementwise L1 penalty. For 0 < l1\_ratio < 1, the penalty is a combination of L1 and L2.
New in version 0.17: Regularization parameter *l1\_ratio* used in the Coordinate Descent solver.
**verbose**int, default=0
Whether to be verbose.
**shuffle**bool, default=False
If true, randomize the order of coordinates in the CD solver.
New in version 0.17: *shuffle* parameter used in the Coordinate Descent solver.
**regularization**{‘both’, ‘components’, ‘transformation’, None}, default=’both’
Select whether the regularization affects the components (H), the transformation (W), both or none of them.
New in version 0.24.
Deprecated since version 1.0: The `regularization` parameter is deprecated in 1.0 and will be removed in 1.2. Use `alpha_W` and `alpha_H` instead.
Attributes:
**components\_**ndarray of shape (n\_components, n\_features)
Factorization matrix, sometimes called ‘dictionary’.
**n\_components\_**int
The number of components. It is the same as the `n_components` parameter if it was given. Otherwise, it will be the same as the number of features.
**reconstruction\_err\_**float
Frobenius norm of the matrix difference, or beta-divergence, between the training data `X` and the reconstructed data `WH` from the fitted model.
**n\_iter\_**int
Actual number of iterations.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`DictionaryLearning`](sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning "sklearn.decomposition.DictionaryLearning")
Find a dictionary that sparsely encodes data.
[`MiniBatchSparsePCA`](sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA "sklearn.decomposition.MiniBatchSparsePCA")
Mini-batch Sparse Principal Components Analysis.
[`PCA`](sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")
Principal component analysis.
[`SparseCoder`](sklearn.decomposition.sparsecoder#sklearn.decomposition.SparseCoder "sklearn.decomposition.SparseCoder")
Find a sparse representation of data from a fixed, precomputed dictionary.
[`SparsePCA`](sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA")
Sparse Principal Components Analysis.
[`TruncatedSVD`](sklearn.decomposition.truncatedsvd#sklearn.decomposition.TruncatedSVD "sklearn.decomposition.TruncatedSVD")
Dimensionality reduction using truncated SVD.
#### References
[1] [“Fast local algorithms for large scale nonnegative matrix and tensor factorizations”](https://doi.org/10.1587/transfun.E92.A.708) Cichocki, Andrzej, and P. H. A. N. Anh-Huy. IEICE transactions on fundamentals of electronics, communications and computer sciences 92.3: 708-721, 2009.
[2] [“Algorithms for nonnegative matrix factorization with the beta-divergence”](https://doi.org/10.1162/NECO_a_00168) Fevotte, C., & Idier, J. (2011). Neural Computation, 23(9).
#### Examples
```
>>> import numpy as np
>>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import NMF
>>> model = NMF(n_components=2, init='random', random_state=0)
>>> W = model.fit_transform(X)
>>> H = model.components_
```
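As a follow-up sketch to the example above (same toy matrix): `W @ H` approximates `X`, and `reconstruction_err_` reports the remaining misfit. Exact values depend on the solver and initialization.

```
import numpy as np
from sklearn.decomposition import NMF

X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
model = NMF(n_components=2, init='random', random_state=0)
W = model.fit_transform(X)           # transformed data, shape (6, 2)
H = model.components_                # factorization matrix, shape (2, 2)

print(np.abs(X - W @ H).max())       # small residual on this toy matrix
print(model.reconstruction_err_)     # Frobenius norm of (X - WH) for the default beta_loss
```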
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.decomposition.NMF.fit "sklearn.decomposition.NMF.fit")(X[, y]) | Learn a NMF model for the data X. |
| [`fit_transform`](#sklearn.decomposition.NMF.fit_transform "sklearn.decomposition.NMF.fit_transform")(X[, y, W, H]) | Learn a NMF model for the data X and returns the transformed data. |
| [`get_feature_names_out`](#sklearn.decomposition.NMF.get_feature_names_out "sklearn.decomposition.NMF.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.decomposition.NMF.get_params "sklearn.decomposition.NMF.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.decomposition.NMF.inverse_transform "sklearn.decomposition.NMF.inverse_transform")(W) | Transform data back to its original space. |
| [`set_params`](#sklearn.decomposition.NMF.set_params "sklearn.decomposition.NMF.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.decomposition.NMF.transform "sklearn.decomposition.NMF.transform")(X) | Transform the data X according to the fitted NMF model. |
fit(*X*, *y=None*, *\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_nmf.py#L1701)
Learn a NMF model for the data X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
**\*\*params**kwargs
Parameters (keyword arguments) and values passed to the fit\_transform instance.
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *W=None*, *H=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_nmf.py#L1563)
Learn a NMF model for the data X and returns the transformed data.
This is more efficient than calling fit followed by transform.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
**W**array-like of shape (n\_samples, n\_components)
If init=’custom’, it is used as initial guess for the solution.
**H**array-like of shape (n\_components, n\_features)
If init=’custom’, it is used as initial guess for the solution.
Returns:
**W**ndarray of shape (n\_samples, n\_components)
Transformed data.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.decomposition.NMF.fit "sklearn.decomposition.NMF.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*W*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_nmf.py#L1749)
Transform data back to its original space.
New in version 0.18.
Parameters:
**W**{ndarray, sparse matrix} of shape (n\_samples, n\_components)
Transformed data matrix.
Returns:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Returns a data matrix of the original shape.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_nmf.py#L1725)
Transform the data X according to the fitted NMF model.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
Returns:
**W**ndarray of shape (n\_samples, n\_components)
Transformed data.
Examples using `sklearn.decomposition.NMF`
------------------------------------------
[Beta-divergence loss functions](../../auto_examples/decomposition/plot_beta_divergence#sphx-glr-auto-examples-decomposition-plot-beta-divergence-py)
[Faces dataset decompositions](../../auto_examples/decomposition/plot_faces_decomposition#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py)
[Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation](../../auto_examples/applications/plot_topics_extraction_with_nmf_lda#sphx-glr-auto-examples-applications-plot-topics-extraction-with-nmf-lda-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](../../auto_examples/compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
sklearn.cross\_decomposition.PLSRegression
==========================================
*class*sklearn.cross\_decomposition.PLSRegression(*n\_components=2*, *\**, *scale=True*, *max\_iter=500*, *tol=1e-06*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L523)
PLS regression.
PLSRegression is also known as PLS2 or PLS1, depending on the number of targets.
Read more in the [User Guide](../cross_decomposition#cross-decomposition).
New in version 0.8.
Parameters:
**n\_components**int, default=2
Number of components to keep. Should be in `[1, min(n_samples, n_features, n_targets)]`.
**scale**bool, default=True
Whether to scale `X` and `Y`.
**max\_iter**int, default=500
The maximum number of iterations of the power method when `algorithm='nipals'`. Ignored otherwise.
**tol**float, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of `u_i - u_{i-1}` is less than `tol`, where `u` corresponds to the left singular vector.
**copy**bool, default=True
Whether to copy `X` and `Y` in [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) before applying centering, and potentially scaling. If `False`, these operations will be done inplace, modifying both arrays.
Attributes:
**x\_weights\_**ndarray of shape (n\_features, n\_components)
The left singular vectors of the cross-covariance matrices of each iteration.
**y\_weights\_**ndarray of shape (n\_targets, n\_components)
The right singular vectors of the cross-covariance matrices of each iteration.
**x\_loadings\_**ndarray of shape (n\_features, n\_components)
The loadings of `X`.
**y\_loadings\_**ndarray of shape (n\_targets, n\_components)
The loadings of `Y`.
**x\_scores\_**ndarray of shape (n\_samples, n\_components)
The transformed training samples.
**y\_scores\_**ndarray of shape (n\_samples, n\_components)
The transformed training targets.
**x\_rotations\_**ndarray of shape (n\_features, n\_components)
The projection matrix used to transform `X`.
**y\_rotations\_**ndarray of shape (n\_targets, n\_components)
The projection matrix used to transform `Y`.
[`coef_`](#sklearn.cross_decomposition.PLSRegression.coef_ "sklearn.cross_decomposition.PLSRegression.coef_")ndarray of shape (n\_features, n\_targets)
The coefficients of the linear model.
**intercept\_**ndarray of shape (n\_targets,)
The intercepts of the linear model such that `Y` is approximated as `Y = X @ coef_ + intercept_`.
New in version 1.1.
**n\_iter\_**list of shape (n\_components,)
Number of iterations of the power method, for each component.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`PLSCanonical`](sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical "sklearn.cross_decomposition.PLSCanonical")
Partial Least Squares transformer and regressor.
#### Examples
```
>>> from sklearn.cross_decomposition import PLSRegression
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> pls2 = PLSRegression(n_components=2)
>>> pls2.fit(X, Y)
PLSRegression()
>>> Y_pred = pls2.predict(X)
```
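A short follow-up sketch (same toy data as above), showing the transformed scores and the regression score; exact numbers will vary slightly:

```
import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]])
Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])
pls2 = PLSRegression(n_components=2).fit(X, Y)

x_scores = pls2.transform(X)   # projection of X onto the latent components
print(x_scores.shape)          # (4, 2)
print(pls2.score(X, Y))        # R^2 of the prediction; high on this toy data
```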
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cross_decomposition.PLSRegression.fit "sklearn.cross_decomposition.PLSRegression.fit")(X, Y) | Fit model to data. |
| [`fit_transform`](#sklearn.cross_decomposition.PLSRegression.fit_transform "sklearn.cross_decomposition.PLSRegression.fit_transform")(X[, y]) | Learn and apply the dimension reduction on the train data. |
| [`get_feature_names_out`](#sklearn.cross_decomposition.PLSRegression.get_feature_names_out "sklearn.cross_decomposition.PLSRegression.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.cross_decomposition.PLSRegression.get_params "sklearn.cross_decomposition.PLSRegression.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.cross_decomposition.PLSRegression.inverse_transform "sklearn.cross_decomposition.PLSRegression.inverse_transform")(X[, Y]) | Transform data back to its original space. |
| [`predict`](#sklearn.cross_decomposition.PLSRegression.predict "sklearn.cross_decomposition.PLSRegression.predict")(X[, copy]) | Predict targets of given samples. |
| [`score`](#sklearn.cross_decomposition.PLSRegression.score "sklearn.cross_decomposition.PLSRegression.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.cross_decomposition.PLSRegression.set_params "sklearn.cross_decomposition.PLSRegression.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.cross_decomposition.PLSRegression.transform "sklearn.cross_decomposition.PLSRegression.transform")(X[, Y, copy]) | Apply the dimension reduction. |
*property*coef\_
The coefficients of the linear model.
fit(*X*, *Y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L642)
Fit model to data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of predictors.
**Y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target vectors, where `n_samples` is the number of samples and `n_targets` is the number of response variables.
Returns:
**self**object
Fitted model.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L479)
Learn and apply the dimension reduction on the train data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of predictors.
**y**array-like of shape (n\_samples, n\_targets), default=None
Target vectors, where `n_samples` is the number of samples and `n_targets` is the number of response variables.
Returns:
**self**ndarray of shape (n\_samples, n\_components)
Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.cross_decomposition.PLSRegression.fit "sklearn.cross_decomposition.PLSRegression.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*, *Y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L404)
Transform data back to its original space.
Parameters:
**X**array-like of shape (n\_samples, n\_components)
New data, where `n_samples` is the number of samples and `n_components` is the number of pls components.
**Y**array-like of shape (n\_samples, n\_components)
New target, where `n_samples` is the number of samples and `n_components` is the number of pls components.
Returns:
**X\_reconstructed**ndarray of shape (n\_samples, n\_features)
Return the reconstructed `X` data.
**Y\_reconstructed**ndarray of shape (n\_samples, n\_targets)
Return the reconstructed `Y` target. Only returned when `Y` is given.
#### Notes
This transformation will only be exact if `n_components=n_features`.
predict(*X*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L448)
Predict targets of given samples.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples.
**copy**bool, default=True
Whether to copy `X` and `Y`, or perform in-place normalization.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Returns predicted values.
#### Notes
This call requires the estimation of a matrix of shape `(n_features, n_targets)`, which may be an issue in high dimensional space.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*, *Y=None*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L365)
Apply the dimension reduction.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples to transform.
**Y**array-like of shape (n\_samples, n\_targets), default=None
Target vectors.
**copy**bool, default=True
Whether to copy `X` and `Y`, or perform in-place normalization.
Returns:
**x\_scores, y\_scores**array-like or tuple of array-like
Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
Examples using `sklearn.cross_decomposition.PLSRegression`
----------------------------------------------------------
[Compare cross decomposition methods](../../auto_examples/cross_decomposition/plot_compare_cross_decomposition#sphx-glr-auto-examples-cross-decomposition-plot-compare-cross-decomposition-py)
[Principal Component Regression vs Partial Least Squares Regression](../../auto_examples/cross_decomposition/plot_pcr_vs_pls#sphx-glr-auto-examples-cross-decomposition-plot-pcr-vs-pls-py)
sklearn.metrics.pairwise.cosine\_distances
==========================================
sklearn.metrics.pairwise.cosine\_distances(*X*, *Y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L957)
Compute cosine distance between samples in X and Y.
Cosine distance is defined as 1.0 minus the cosine similarity.
Read more in the [User Guide](../metrics#metrics).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples\_X, n\_features)
Matrix `X`.
**Y**{array-like, sparse matrix} of shape (n\_samples\_Y, n\_features), default=None
Matrix `Y`.
Returns:
**distance matrix**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Returns the cosine distance between samples in X and Y.
See also
[`cosine_similarity`](sklearn.metrics.pairwise.cosine_similarity#sklearn.metrics.pairwise.cosine_similarity "sklearn.metrics.pairwise.cosine_similarity")
Compute cosine similarity between samples in X and Y.
[`scipy.spatial.distance.cosine`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cosine.html#scipy.spatial.distance.cosine "(in SciPy v1.9.3)")
Dense matrices only.
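A minimal usage sketch (toy vectors chosen for illustration):

```
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

X = np.array([[1.0, 0.0], [1.0, 1.0]])
Y = np.array([[0.0, 1.0]])

# 1 - cosine similarity: orthogonal vectors give 1.0, a 45-degree angle gives ~0.29.
print(cosine_distances(X, Y))
```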
sklearn.datasets.make\_gaussian\_quantiles
==========================================
sklearn.datasets.make\_gaussian\_quantiles(*\**, *mean=None*, *cov=1.0*, *n\_samples=100*, *n\_features=2*, *n\_classes=3*, *shuffle=True*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L1604)
Generate isotropic Gaussian and label samples by quantile.
This classification dataset is constructed by taking a multi-dimensional standard normal distribution and defining classes separated by nested concentric multi-dimensional spheres such that roughly equal numbers of samples are in each class (quantiles of the \(\chi^2\) distribution).
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**mean**ndarray of shape (n\_features,), default=None
The mean of the multi-dimensional normal distribution. If None then use the origin (0, 0, …).
**cov**float, default=1.0
The covariance matrix will be this value times the unit matrix. This dataset only produces symmetric normal distributions.
**n\_samples**int, default=100
The total number of points equally divided among classes.
**n\_features**int, default=2
The number of features for each sample.
**n\_classes**int, default=3
The number of classes.
**shuffle**bool, default=True
Shuffle the samples.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**X**ndarray of shape (n\_samples, n\_features)
The generated samples.
**y**ndarray of shape (n\_samples,)
The integer labels for quantile membership of each sample.
#### Notes
The dataset is from Zhu et al [1].
#### References
[1] J. Zhu, H. Zou, S. Rosset, T. Hastie, “Multi-class AdaBoost”, 2009.
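A minimal usage sketch; the classes are (approximately) equally populated quantile shells around the origin:

```
import numpy as np
from sklearn.datasets import make_gaussian_quantiles

X, y = make_gaussian_quantiles(n_samples=300, n_features=2,
                               n_classes=3, random_state=0)
print(X.shape, y.shape)   # (300, 2) (300,)
print(np.bincount(y))     # roughly equal class counts, e.g. [100 100 100]
```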
Examples using `sklearn.datasets.make_gaussian_quantiles`
---------------------------------------------------------
[Plot randomly generated classification dataset](../../auto_examples/datasets/plot_random_dataset#sphx-glr-auto-examples-datasets-plot-random-dataset-py)
[Multi-class AdaBoosted Decision Trees](../../auto_examples/ensemble/plot_adaboost_multiclass#sphx-glr-auto-examples-ensemble-plot-adaboost-multiclass-py)
[Two-class AdaBoost](../../auto_examples/ensemble/plot_adaboost_twoclass#sphx-glr-auto-examples-ensemble-plot-adaboost-twoclass-py)
sklearn.neighbors.KNeighborsClassifier
======================================
*class*sklearn.neighbors.KNeighborsClassifier(*n\_neighbors=5*, *\**, *weights='uniform'*, *algorithm='auto'*, *leaf\_size=30*, *p=2*, *metric='minkowski'*, *metric\_params=None*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L22)
Classifier implementing the k-nearest neighbors vote.
Read more in the [User Guide](../neighbors#classification).
Parameters:
**n\_neighbors**int, default=5
Number of neighbors to use by default for [`kneighbors`](#sklearn.neighbors.KNeighborsClassifier.kneighbors "sklearn.neighbors.KNeighborsClassifier.kneighbors") queries.
**weights**{‘uniform’, ‘distance’} or callable, default=’uniform’
Weight function used in prediction. Possible values:
* ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.
* ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.
* [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree")
* ‘kd\_tree’ will use [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree")
* ‘brute’ will use a brute-force search.
* ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.neighbors.KNeighborsClassifier.fit "sklearn.neighbors.KNeighborsClassifier.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**p**int, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan\_distance (l1); when p = 2, to euclidean\_distance (l2). For arbitrary p, minkowski\_distance (l\_p) is used.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Default is “minkowski”, which results in the standard Euclidean distance when p = 2. See the documentation of [scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in [`distance_metrics`](sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics "sklearn.metrics.pairwise.distance_metrics") for valid metric values.
If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a [sparse graph](https://scikit-learn.org/1.1/glossary.html#term-sparse-graph), in which case only “nonzero” elements may be considered neighbors.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details. Doesn’t affect [`fit`](#sklearn.neighbors.KNeighborsClassifier.fit "sklearn.neighbors.KNeighborsClassifier.fit") method.
Attributes:
**classes\_**array of shape (n\_classes,)
Class labels known to the classifier.
**effective\_metric\_**str or callable
The distance metric used. It will be the same as the `metric` parameter or a synonym of it, e.g. ‘euclidean’ if the `metric` parameter is set to ‘minkowski’ and the `p` parameter is set to 2.
**effective\_metric\_params\_**dict
Additional keyword arguments for the metric function. For most metrics this will be the same as the `metric_params` parameter, but it may also contain the `p` parameter value if the `effective_metric_` attribute is set to ‘minkowski’.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_fit\_**int
Number of samples in the fitted data.
**outputs\_2d\_**bool
False when `y`’s shape is (n\_samples, ) or (n\_samples, 1) during fit otherwise True.
See also
[`RadiusNeighborsClassifier`](sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
Classifier based on neighbors within a fixed radius.
[`KNeighborsRegressor`](sklearn.neighbors.kneighborsregressor#sklearn.neighbors.KNeighborsRegressor "sklearn.neighbors.KNeighborsRegressor")
Regression based on k-nearest neighbors.
[`RadiusNeighborsRegressor`](sklearn.neighbors.radiusneighborsregressor#sklearn.neighbors.RadiusNeighborsRegressor "sklearn.neighbors.RadiusNeighborsRegressor")
Regression based on neighbors within a fixed radius.
[`NearestNeighbors`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors "sklearn.neighbors.NearestNeighbors")
Unsupervised learner for implementing neighbor searches.
#### Notes
See [Nearest Neighbors](../neighbors#neighbors) in the online documentation for a discussion of the choice of `algorithm` and `leaf_size`.
Warning
Regarding the Nearest Neighbors algorithms, if it is found that two neighbors, neighbor `k+1` and `k`, have identical distances but different labels, the results will depend on the ordering of the training data.
<https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm>
#### Examples
```
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import KNeighborsClassifier
>>> neigh = KNeighborsClassifier(n_neighbors=3)
>>> neigh.fit(X, y)
KNeighborsClassifier(...)
>>> print(neigh.predict([[1.1]]))
[0]
>>> print(neigh.predict_proba([[0.9]]))
[[0.666... 0.333...]]
```
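As noted for the `weights` parameter above, a user-defined callable may also be passed. The following is an illustrative sketch, not part of the upstream page; `gaussian_weights` is a hypothetical helper rather than anything provided by scikit-learn:
```
>>> import numpy as np
>>> from sklearn.neighbors import KNeighborsClassifier
>>> def gaussian_weights(distances):
...     # hypothetical helper: maps an array of distances to same-shaped weights
...     return np.exp(-(distances ** 2))
>>> X, y = [[0], [1], [2], [3]], [0, 0, 1, 1]
>>> clf = KNeighborsClassifier(n_neighbors=3, weights=gaussian_weights).fit(X, y)
>>> print(clf.predict([[1.1]]))
[0]
```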
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.neighbors.KNeighborsClassifier.fit "sklearn.neighbors.KNeighborsClassifier.fit")(X, y) | Fit the k-nearest neighbors classifier from the training dataset. |
| [`get_params`](#sklearn.neighbors.KNeighborsClassifier.get_params "sklearn.neighbors.KNeighborsClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`kneighbors`](#sklearn.neighbors.KNeighborsClassifier.kneighbors "sklearn.neighbors.KNeighborsClassifier.kneighbors")([X, n\_neighbors, return\_distance]) | Find the K-neighbors of a point. |
| [`kneighbors_graph`](#sklearn.neighbors.KNeighborsClassifier.kneighbors_graph "sklearn.neighbors.KNeighborsClassifier.kneighbors_graph")([X, n\_neighbors, mode]) | Compute the (weighted) graph of k-Neighbors for points in X. |
| [`predict`](#sklearn.neighbors.KNeighborsClassifier.predict "sklearn.neighbors.KNeighborsClassifier.predict")(X) | Predict the class labels for the provided data. |
| [`predict_proba`](#sklearn.neighbors.KNeighborsClassifier.predict_proba "sklearn.neighbors.KNeighborsClassifier.predict_proba")(X) | Return probability estimates for the test data X. |
| [`score`](#sklearn.neighbors.KNeighborsClassifier.score "sklearn.neighbors.KNeighborsClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.neighbors.KNeighborsClassifier.set_params "sklearn.neighbors.KNeighborsClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L187)
Fit the k-nearest neighbors classifier from the training dataset.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples) if metric=’precomputed’
Training data.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_outputs)
Target values.
Returns:
**self**KNeighborsClassifier
The fitted k-nearest neighbors classifier.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
kneighbors(*X=None*, *n\_neighbors=None*, *return\_distance=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L670)
Find the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.
Parameters:
**X**array-like, shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**n\_neighbors**int, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
**return\_distance**bool, default=True
Whether or not to return the distances.
Returns:
**neigh\_dist**ndarray of shape (n\_queries, n\_neighbors)
Array representing the lengths to points, only present if return\_distance=True.
**neigh\_ind**ndarray of shape (n\_queries, n\_neighbors)
Indices of the nearest points in the population matrix.
#### Examples
In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1]
```
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
```
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points:
```
>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
[2]]...)
```
kneighbors\_graph(*X=None*, *n\_neighbors=None*, *mode='connectivity'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L860)
Compute the (weighted) graph of k-Neighbors for points in X.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For `metric='precomputed'` the shape should be (n\_queries, n\_indexed). Otherwise the shape should be (n\_queries, n\_features).
**n\_neighbors**int, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
**mode**{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; in ‘distance’ the edges are distances between points, and the type of distance depends on the selected metric parameter in the NearestNeighbors class.
Returns:
**A**sparse-matrix of shape (n\_queries, n\_samples\_fit)
`n_samples_fit` is the number of samples in the fitted data. `A[i, j]` gives the weight of the edge connecting `i` to `j`. The matrix is of CSR format.
See also
[`NearestNeighbors.radius_neighbors_graph`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.radius_neighbors_graph "sklearn.neighbors.NearestNeighbors.radius_neighbors_graph")
Compute the (weighted) graph of Neighbors for points in X.
#### Examples
```
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
```
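For comparison, a small sketch of the same call with `mode='distance'`, where the nonzero entries hold metric distances instead of ones (the values shown follow from the toy data above and are our own illustration):
```
>>> from sklearn.neighbors import NearestNeighbors
>>> X = [[0], [3], [1]]
>>> neigh = NearestNeighbors(n_neighbors=2).fit(X)
>>> neigh.kneighbors_graph(X, mode='distance').toarray()
array([[0., 0., 1.],
       [0., 0., 2.],
       [1., 0., 0.]])
```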
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L209)
Predict the class labels for the provided data.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’
Test samples.
Returns:
**y**ndarray of shape (n\_queries,) or (n\_queries, n\_outputs)
Class labels for each data sample.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_classification.py#L256)
Return probability estimates for the test data X.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’
Test samples.
Returns:
**p**ndarray of shape (n\_queries, n\_classes), or a list of n\_outputs of such arrays if n\_outputs > 1.
The class probabilities of the input samples. Classes are ordered by lexicographic order.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.neighbors.KNeighborsClassifier`
-------------------------------------------------------
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Classifier comparison](../../auto_examples/classification/plot_classifier_comparison#sphx-glr-auto-examples-classification-plot-classifier-comparison-py)
[Plot the decision boundaries of a VotingClassifier](../../auto_examples/ensemble/plot_voting_decision_regions#sphx-glr-auto-examples-ensemble-plot-voting-decision-regions-py)
[Caching nearest neighbors](../../auto_examples/neighbors/plot_caching_nearest_neighbors#sphx-glr-auto-examples-neighbors-plot-caching-nearest-neighbors-py)
[Comparing Nearest Neighbors with and without Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_classification#sphx-glr-auto-examples-neighbors-plot-nca-classification-py)
[Dimensionality Reduction with Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_dim_reduction#sphx-glr-auto-examples-neighbors-plot-nca-dim-reduction-py)
[Nearest Neighbors Classification](../../auto_examples/neighbors/plot_classification#sphx-glr-auto-examples-neighbors-plot-classification-py)
[Digits Classification Exercise](../../auto_examples/exercises/plot_digits_classification_exercise#sphx-glr-auto-examples-exercises-plot-digits-classification-exercise-py)
[Classification of text documents using sparse features](../../auto_examples/text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
scikit_learn sklearn.utils.check_scalar sklearn.utils.check\_scalar
===========================
sklearn.utils.check\_scalar(*x*, *name*, *target\_type*, *\**, *min\_val=None*, *max\_val=None*, *include\_boundaries='both'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L1375)
Validate scalar parameters type and value.
Parameters:
**x**object
The scalar parameter to validate.
**name**str
The name of the parameter to be printed in error messages.
**target\_type**type or tuple
Acceptable data types for the parameter.
**min\_val**float or int, default=None
The minimum valid value the parameter can take. If None (default) it is implied that the parameter does not have a lower bound.
**max\_val**float or int, default=None
The maximum valid value the parameter can take. If None (default) it is implied that the parameter does not have an upper bound.
**include\_boundaries**{“left”, “right”, “both”, “neither”}, default=”both”
Whether the interval defined by `min_val` and `max_val` should include the boundaries. Possible choices are:
* `"left"`: only `min_val` is included in the valid interval. It is equivalent to the interval `[ min_val, max_val )`.
* `"right"`: only `max_val` is included in the valid interval. It is equivalent to the interval `( min_val, max_val ]`.
* `"both"`: `min_val` and `max_val` are included in the valid interval. It is equivalent to the interval `[ min_val, max_val ]`.
* `"neither"`: neither `min_val` nor `max_val` are included in the valid interval. It is equivalent to the interval `( min_val, max_val )`.
Returns:
**x**numbers.Number
The validated number.
Raises:
TypeError
If the parameter’s type does not match the desired type.
ValueError
If the parameter’s value violates the given bounds. If `min_val`, `max_val` and `include_boundaries` are inconsistent.
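#### Examples
A short usage sketch, not part of the upstream page; when the value satisfies the constraints it is returned unchanged:
```
>>> from sklearn.utils import check_scalar
>>> check_scalar(5, "n_neighbors", target_type=int, min_val=1)
5
>>> check_scalar(0.3, "alpha", target_type=float, min_val=0.0, max_val=1.0,
...              include_boundaries="both")
0.3
```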
scikit_learn sklearn.datasets.fetch_olivetti_faces sklearn.datasets.fetch\_olivetti\_faces
=======================================
sklearn.datasets.fetch\_olivetti\_faces(*\**, *data\_home=None*, *shuffle=False*, *random\_state=0*, *download\_if\_missing=True*, *return\_X\_y=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_olivetti_faces.py#L39)
Load the Olivetti faces data-set from AT&T (classification).
Download it if necessary.
| | |
| --- | --- |
| Classes | 40 |
| Samples total | 400 |
| Dimensionality | 4096 |
| Features | real, between 0 and 1 |
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/real_world.html#olivetti-faces-dataset).
Parameters:
**data\_home**str, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit\_learn\_data’ subfolders.
**shuffle**bool, default=False
If True, the order of the dataset is shuffled to avoid having images of the same person grouped together.
**random\_state**int, RandomState instance or None, default=0
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**download\_if\_missing**bool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
**return\_X\_y**bool, default=False
If True, returns `(data, target)` instead of a `Bunch` object. See below for more information about the `data` and `target` object.
New in version 0.22.
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
data: ndarray, shape (400, 4096)
Each row corresponds to a ravelled face image of original size 64 x 64 pixels.
images: ndarray, shape (400, 64, 64)
Each row is a face image corresponding to one of the 40 subjects of the dataset.
target: ndarray, shape (400,)
Labels associated with each face image. The labels range from 0 to 39 and correspond to the subject IDs.
DESCR: str
Description of the modified Olivetti Faces Dataset.
**(data, target)**tuple if `return_X_y=True`
Tuple with the `data` and `target` objects described above.
New in version 0.22.
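#### Examples
A brief sketch of typical usage, not from the upstream page; the first call downloads the data to `data_home` if it is not already cached:
```
>>> from sklearn.datasets import fetch_olivetti_faces
>>> faces = fetch_olivetti_faces()
>>> faces.data.shape
(400, 4096)
>>> faces.images.shape
(400, 64, 64)
>>> faces.target.shape
(400,)
```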
Examples using `sklearn.datasets.fetch_olivetti_faces`
------------------------------------------------------
[Online learning of a dictionary of parts of faces](../../auto_examples/cluster/plot_dict_face_patches#sphx-glr-auto-examples-cluster-plot-dict-face-patches-py)
[Faces dataset decompositions](../../auto_examples/decomposition/plot_faces_decomposition#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py)
[Pixel importances with a parallel forest of trees](../../auto_examples/ensemble/plot_forest_importances_faces#sphx-glr-auto-examples-ensemble-plot-forest-importances-faces-py)
[Face completion with a multi-output estimators](../../auto_examples/miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py)
scikit_learn sklearn.covariance.EmpiricalCovariance sklearn.covariance.EmpiricalCovariance
======================================
*class*sklearn.covariance.EmpiricalCovariance(*\**, *store\_precision=True*, *assume\_centered=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L100)
Maximum likelihood covariance estimator.
Read more in the [User Guide](../covariance#covariance).
Parameters:
**store\_precision**bool, default=True
Specifies if the estimated precision is stored.
**assume\_centered**bool, default=False
If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data are centered before computation.
Attributes:
**location\_**ndarray of shape (n\_features,)
Estimated location, i.e. the estimated mean.
**covariance\_**ndarray of shape (n\_features, n\_features)
Estimated covariance matrix.
**precision\_**ndarray of shape (n\_features, n\_features)
Estimated pseudo-inverse matrix. (stored only if store\_precision is True)
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`EllipticEnvelope`](sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope "sklearn.covariance.EllipticEnvelope")
An object for detecting outliers in a Gaussian distributed dataset.
[`GraphicalLasso`](sklearn.covariance.graphicallasso#sklearn.covariance.GraphicalLasso "sklearn.covariance.GraphicalLasso")
Sparse inverse covariance estimation with an l1-penalized estimator.
[`LedoitWolf`](sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf "sklearn.covariance.LedoitWolf")
LedoitWolf Estimator.
[`MinCovDet`](sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet "sklearn.covariance.MinCovDet")
Minimum Covariance Determinant (robust estimator of covariance).
[`OAS`](sklearn.covariance.oas#sklearn.covariance.OAS "sklearn.covariance.OAS")
Oracle Approximating Shrinkage Estimator.
[`ShrunkCovariance`](sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance "sklearn.covariance.ShrunkCovariance")
Covariance estimator with shrinkage.
#### Examples
```
>>> import numpy as np
>>> from sklearn.covariance import EmpiricalCovariance
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> cov = EmpiricalCovariance().fit(X)
>>> cov.covariance_
array([[0.7569..., 0.2818...],
[0.2818..., 0.3928...]])
>>> cov.location_
array([0.0622..., 0.0193...])
```
#### Methods
| | |
| --- | --- |
| [`error_norm`](#sklearn.covariance.EmpiricalCovariance.error_norm "sklearn.covariance.EmpiricalCovariance.error_norm")(comp\_cov[, norm, scaling, squared]) | Compute the Mean Squared Error between two covariance estimators. |
| [`fit`](#sklearn.covariance.EmpiricalCovariance.fit "sklearn.covariance.EmpiricalCovariance.fit")(X[, y]) | Fit the maximum likelihood covariance estimator to X. |
| [`get_params`](#sklearn.covariance.EmpiricalCovariance.get_params "sklearn.covariance.EmpiricalCovariance.get_params")([deep]) | Get parameters for this estimator. |
| [`get_precision`](#sklearn.covariance.EmpiricalCovariance.get_precision "sklearn.covariance.EmpiricalCovariance.get_precision")() | Getter for the precision matrix. |
| [`mahalanobis`](#sklearn.covariance.EmpiricalCovariance.mahalanobis "sklearn.covariance.EmpiricalCovariance.mahalanobis")(X) | Compute the squared Mahalanobis distances of given observations. |
| [`score`](#sklearn.covariance.EmpiricalCovariance.score "sklearn.covariance.EmpiricalCovariance.score")(X\_test[, y]) | Compute the log-likelihood of `X_test` under the estimated Gaussian model. |
| [`set_params`](#sklearn.covariance.EmpiricalCovariance.set_params "sklearn.covariance.EmpiricalCovariance.set_params")(\*\*params) | Set the parameters of this estimator. |
error\_norm(*comp\_cov*, *norm='frobenius'*, *scaling=True*, *squared=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L267)
Compute the Mean Squared Error between two covariance estimators.
Parameters:
**comp\_cov**array-like of shape (n\_features, n\_features)
The covariance to compare with.
**norm**{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types:
* ‘frobenius’ (default): sqrt(tr(A^t.A))
* ‘spectral’: sqrt(max(eigenvalues(A^t.A)))
where A is the error `(comp_cov - self.covariance_)`.
**scaling**bool, default=True
If True (default), the squared error norm is divided by n\_features. If False, the squared error norm is not rescaled.
**squared**bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns:
**result**float
The Mean Squared Error (in the sense of the Frobenius norm) between `self` and `comp_cov` covariance estimators.
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L209)
Fit the maximum likelihood covariance estimator to X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_precision()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L195)
Getter for the precision matrix.
Returns:
**precision\_**array-like of shape (n\_features, n\_features)
The precision matrix associated to the current covariance object.
mahalanobis(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L318)
Compute the squared Mahalanobis distances of given observations.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The observations for which the Mahalanobis distances are computed. Observations are assumed to be drawn from the same distribution as the data used in fit.
Returns:
**dist**ndarray of shape (n\_samples,)
Squared Mahalanobis distances of the observations.
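#### Examples
A short sketch on synthetic data, not from the upstream page; the fitted location has squared Mahalanobis distance zero by construction:
```
>>> import numpy as np
>>> from sklearn.covariance import EmpiricalCovariance
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(100, 2)
>>> cov = EmpiricalCovariance().fit(X)
>>> cov.mahalanobis(X[:3]).shape
(3,)
>>> cov.mahalanobis(cov.location_.reshape(1, -1))
array([0.])
```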
score(*X\_test*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L236)
Compute the log-likelihood of `X_test` under the estimated Gaussian model.
The Gaussian model is defined by its mean and covariance matrix which are represented respectively by `self.location_` and `self.covariance_`.
Parameters:
**X\_test**array-like of shape (n\_samples, n\_features)
Test data of which we compute the likelihood, where `n_samples` is the number of samples and `n_features` is the number of features. `X_test` is assumed to be drawn from the same distribution as the data used in fit (including centering).
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**res**float
The log-likelihood of `X_test` with `self.location_` and `self.covariance_` as estimators of the Gaussian model mean and covariance matrix respectively.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.covariance.EmpiricalCovariance`
-------------------------------------------------------
[Robust covariance estimation and Mahalanobis distances relevance](../../auto_examples/covariance/plot_mahalanobis_distances#sphx-glr-auto-examples-covariance-plot-mahalanobis-distances-py)
[Robust vs Empirical covariance estimate](../../auto_examples/covariance/plot_robust_vs_empirical_covariance#sphx-glr-auto-examples-covariance-plot-robust-vs-empirical-covariance-py)
[Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood](../../auto_examples/covariance/plot_covariance_estimation#sphx-glr-auto-examples-covariance-plot-covariance-estimation-py)
scikit_learn sklearn.utils.deprecated sklearn.utils.deprecated
========================
sklearn.utils.deprecated(*extra=''*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/deprecation.py#L8)
Decorator to mark a function or class as deprecated.
Issue a warning when the function is called/the class is instantiated and adds a warning to the docstring.
The optional extra argument will be appended to the deprecation message and the docstring. Note: to use this with the default value for extra, put in an empty pair of parentheses:
```
>>> from sklearn.utils import deprecated
>>> deprecated()
<sklearn.utils.deprecation.deprecated object at ...>
```
```
>>> @deprecated()
... def some_function(): pass
```
Parameters:
**extra**str, default=’’
To be added to the deprecation messages.
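A hedged sketch of passing a custom message via `extra`; the function name and message text here are illustrative, and the assertion only checks that a deprecation warning was emitted rather than its exact wording:
```
>>> import warnings
>>> from sklearn.utils import deprecated
>>> @deprecated("use new_function instead")
... def old_function():
...     return 42
>>> with warnings.catch_warnings(record=True) as caught:
...     warnings.simplefilter("always")
...     result = old_function()
>>> result
42
>>> "deprecated" in str(caught[0].message)
True
```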
scikit_learn sklearn.multioutput.MultiOutputClassifier sklearn.multioutput.MultiOutputClassifier
=========================================
*class*sklearn.multioutput.MultiOutputClassifier(*estimator*, *\**, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L338)
Multi target classification.
This strategy consists of fitting one classifier per target. This is a simple strategy for extending classifiers that do not natively support multi-target classification.
Parameters:
**estimator**estimator object
An estimator object implementing [fit](https://scikit-learn.org/1.1/glossary.html#term-fit), [score](https://scikit-learn.org/1.1/glossary.html#term-score) and [predict\_proba](https://scikit-learn.org/1.1/glossary.html#term-predict_proba).
**n\_jobs**int or None, optional (default=None)
The number of jobs to run in parallel. [`fit`](#sklearn.multioutput.MultiOutputClassifier.fit "sklearn.multioutput.MultiOutputClassifier.fit"), [`predict`](#sklearn.multioutput.MultiOutputClassifier.predict "sklearn.multioutput.MultiOutputClassifier.predict") and [`partial_fit`](#sklearn.multioutput.MultiOutputClassifier.partial_fit "sklearn.multioutput.MultiOutputClassifier.partial_fit") (if supported by the passed estimator) will be parallelized for each target.
When individual estimators are fast to train or predict, using `n_jobs > 1` can result in slower performance due to the parallelism overhead.
`None` means `1` unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all available processes / threads. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Changed in version 0.20: `n_jobs` default changed from `1` to `None`.
Attributes:
**classes\_**ndarray of shape (n\_classes,)
Class labels.
**estimators\_**list of `n_output` estimators
Estimators used for predictions.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying `estimator` exposes such an attribute when fit.
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying estimators expose such an attribute when fit.
New in version 1.0.
See also
[`ClassifierChain`](sklearn.multioutput.classifierchain#sklearn.multioutput.ClassifierChain "sklearn.multioutput.ClassifierChain")
A multi-label model that arranges binary classifiers into a chain.
[`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")
Fits one regressor per target variable.
#### Examples
```
>>> import numpy as np
>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_multilabel_classification(n_classes=3, random_state=0)
>>> clf = MultiOutputClassifier(LogisticRegression()).fit(X, y)
>>> clf.predict(X[-2:])
array([[1, 1, 1],
[1, 0, 1]])
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.multioutput.MultiOutputClassifier.fit "sklearn.multioutput.MultiOutputClassifier.fit")(X, Y[, sample\_weight]) | Fit the model to data matrix X and targets Y. |
| [`get_params`](#sklearn.multioutput.MultiOutputClassifier.get_params "sklearn.multioutput.MultiOutputClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.multioutput.MultiOutputClassifier.partial_fit "sklearn.multioutput.MultiOutputClassifier.partial_fit")(X, y[, classes, sample\_weight]) | Incrementally fit a separate model for each class output. |
| [`predict`](#sklearn.multioutput.MultiOutputClassifier.predict "sklearn.multioutput.MultiOutputClassifier.predict")(X) | Predict multi-output variable using model for each target variable. |
| [`predict_proba`](#sklearn.multioutput.MultiOutputClassifier.predict_proba "sklearn.multioutput.MultiOutputClassifier.predict_proba")(X) | Return prediction probabilities for each class of each output. |
| [`score`](#sklearn.multioutput.MultiOutputClassifier.score "sklearn.multioutput.MultiOutputClassifier.score")(X, y) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.multioutput.MultiOutputClassifier.set_params "sklearn.multioutput.MultiOutputClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *Y*, *sample\_weight=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L409)
Fit the model to data matrix X and targets Y.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input data.
**Y**array-like of shape (n\_samples, n\_classes)
The target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If `None`, then samples are equally weighted. Only supported if the underlying classifier supports sample weights.
**\*\*fit\_params**dict of string -> object
Parameters passed to the `estimator.fit` method of each step.
New in version 0.23.
Returns:
**self**object
Returns a fitted instance.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
partial\_fit(*X*, *y*, *classes=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L87)
Incrementally fit a separate model for each class output.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input data.
**y**{array-like, sparse matrix} of shape (n\_samples, n\_outputs)
Multi-output targets.
**classes**list of ndarray of shape (n\_outputs,), default=None
Each array is unique classes for one output in str/int. Can be obtained via `[np.unique(y[:, i]) for i in range(y.shape[1])]`, where `y` is the target matrix of the entire dataset. This argument is required for the first call to partial\_fit and can be omitted in the subsequent calls. Note that `y` doesn’t need to contain all labels in `classes`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If `None`, then samples are equally weighted. Only supported if the underlying classifier supports sample weights.
Returns:
**self**object
Returns a fitted instance.
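#### Examples
A hedged sketch of incremental fitting with an estimator that supports `partial_fit` (here `SGDClassifier`, chosen purely for illustration); the first call must receive `classes`:
```
>>> import numpy as np
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X = np.array([[0.0], [1.0], [2.0], [3.0]])
>>> y = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])
>>> classes = [np.unique(y[:, i]) for i in range(y.shape[1])]
>>> clf = MultiOutputClassifier(SGDClassifier(random_state=0))
>>> clf = clf.partial_fit(X[:2], y[:2], classes=classes)  # first call: pass classes
>>> clf = clf.partial_fit(X[2:], y[2:])                   # later calls may omit it
>>> clf.predict(X).shape
(4, 2)
```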
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L216)
Predict multi-output variable using model for each target variable.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input data.
Returns:
**y**{array-like, sparse matrix} of shape (n\_samples, n\_outputs)
Multi-output targets predicted across multiple predictors. Note: Separate models are generated for each predictor.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L450)
Return prediction probabilities for each class of each output.
This method will raise a `ValueError` if any of the estimators do not have `predict_proba`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input data.
Returns:
**p**array of shape (n\_samples, n\_classes), or a list of n\_outputs such arrays if n\_outputs > 1.
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
Changed in version 0.19: This function now returns a list of arrays where the length of the list is `n_outputs`, and each array is (`n_samples`, `n_classes`) for that particular output.
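#### Examples
A short sketch, not from the upstream page, showing the list-of-arrays return structure when there are multiple outputs:
```
>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X, y = make_multilabel_classification(n_classes=3, random_state=0)
>>> clf = MultiOutputClassifier(LogisticRegression()).fit(X, y)
>>> probs = clf.predict_proba(X[:2])
>>> len(probs)          # one array per output
3
>>> probs[0].shape      # (n_samples, n_classes) for the first output
(2, 2)
```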
score(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/multioutput.py#L478)
Return the mean accuracy on the given test data and labels.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples, n\_outputs)
True values for X.
Returns:
**scores**float
Mean accuracy of predicted target versus true target.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
| programming_docs |
scikit_learn sklearn.decomposition.MiniBatchDictionaryLearning sklearn.decomposition.MiniBatchDictionaryLearning
=================================================
*class*sklearn.decomposition.MiniBatchDictionaryLearning(*n\_components=None*, *\**, *alpha=1*, *n\_iter='deprecated'*, *max\_iter=None*, *fit\_algorithm='lars'*, *n\_jobs=None*, *batch\_size='warn'*, *shuffle=True*, *dict\_init=None*, *transform\_algorithm='omp'*, *transform\_n\_nonzero\_coefs=None*, *transform\_alpha=None*, *verbose=False*, *split\_sign=False*, *random\_state=None*, *positive\_code=False*, *positive\_dict=False*, *transform\_max\_iter=1000*, *callback=None*, *tol=0.001*, *max\_no\_improvement=10*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L1731)
Mini-batch dictionary learning.
Finds a dictionary (a set of atoms) that performs well at sparsely encoding the fitted data.
Solves the optimization problem:
```
(U^*,V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
(U,V)
with || V_k ||_2 <= 1 for all 0 <= k < n_components
```
||.||\_Fro stands for the Frobenius norm and ||.||\_1,1 stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix.
Read more in the [User Guide](../decomposition#dictionarylearning).
Parameters:
**n\_components**int, default=None
Number of dictionary elements to extract.
**alpha**float, default=1
Sparsity controlling parameter.
**n\_iter**int, default=1000
Total number of iterations over data batches to perform.
Deprecated since version 1.1: `n_iter` is deprecated in 1.1 and will be removed in 1.3. Use `max_iter` instead.
**max\_iter**int, default=None
Maximum number of iterations over the complete dataset before stopping independently of any early stopping criterion heuristics. If `max_iter` is not None, `n_iter` is ignored.
New in version 1.1.
**fit\_algorithm**{‘lars’, ‘cd’}, default=’lars’
The algorithm used:
* `'lars'`: uses the least angle regression method to solve the lasso problem (`linear_model.lars_path`)
* `'cd'`: uses the coordinate descent method to compute the Lasso solution (`linear_model.Lasso`). Lars will be faster if the estimated components are sparse.
**n\_jobs**int, default=None
Number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**batch\_size**int, default=3
Number of samples in each mini-batch.
**shuffle**bool, default=True
Whether to shuffle the samples before forming batches.
**dict\_init**ndarray of shape (n\_components, n\_features), default=None
Initial value of the dictionary for warm restart scenarios.
**transform\_algorithm**{‘lasso\_lars’, ‘lasso\_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’omp’
Algorithm used to transform the data:
* `'lars'`: uses the least angle regression method (`linear_model.lars_path`);
* `'lasso_lars'`: uses Lars to compute the Lasso solution.
* `'lasso_cd'`: uses the coordinate descent method to compute the Lasso solution (`linear_model.Lasso`). `'lasso_lars'` will be faster if the estimated components are sparse.
* `'omp'`: uses orthogonal matching pursuit to estimate the sparse solution.
* `'threshold'`: squashes to zero all coefficients less than alpha from the projection `dictionary * X'`.
**transform\_n\_nonzero\_coefs**int, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by `algorithm='lars'` and `algorithm='omp'`. If `None`, then `transform_n_nonzero_coefs=int(n_features / 10)`.
**transform\_alpha**float, default=None
If `algorithm='lasso_lars'` or `algorithm='lasso_cd'`, `alpha` is the penalty applied to the L1 norm. If `algorithm='threshold'`, `alpha` is the absolute value of the threshold below which coefficients will be squashed to zero. If `None`, defaults to `alpha`.
**verbose**bool or int, default=False
To control the verbosity of the procedure.
**split\_sign**bool, default=False
Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers.
**random\_state**int, RandomState instance or None, default=None
Used for initializing the dictionary when `dict_init` is not specified, randomly shuffling the data when `shuffle` is set to `True`, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**positive\_code**bool, default=False
Whether to enforce positivity when finding the code.
New in version 0.20.
**positive\_dict**bool, default=False
Whether to enforce positivity when finding the dictionary.
New in version 0.20.
**transform\_max\_iter**int, default=1000
Maximum number of iterations to perform if `algorithm='lasso_cd'` or `'lasso_lars'`.
New in version 0.22.
**callback**callable, default=None
A callable that gets invoked at the end of each iteration.
New in version 1.1.
**tol**float, default=1e-3
Control early stopping based on the norm of the differences in the dictionary between 2 steps. Used only if `max_iter` is not None.
To disable early stopping based on changes in the dictionary, set `tol` to 0.0.
New in version 1.1.
**max\_no\_improvement**int, default=10
Control early stopping based on the consecutive number of mini-batches that do not yield an improvement on the smoothed cost function. Used only if `max_iter` is not None.
To disable convergence detection based on cost function, set `max_no_improvement` to None.
New in version 1.1.
Attributes:
**components\_**ndarray of shape (n\_components, n\_features)
Components extracted from the data.
[`inner_stats_`](#sklearn.decomposition.MiniBatchDictionaryLearning.inner_stats_ "sklearn.decomposition.MiniBatchDictionaryLearning.inner_stats_")tuple of (A, B) ndarrays
DEPRECATED: The attribute `inner_stats_` is deprecated in 1.1 and will be removed in 1.3.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_iter\_**int
Number of iterations over the full dataset.
[`iter_offset_`](#sklearn.decomposition.MiniBatchDictionaryLearning.iter_offset_ "sklearn.decomposition.MiniBatchDictionaryLearning.iter_offset_")int
DEPRECATED: The attribute `iter_offset_` is deprecated in 1.1 and will be removed in 1.3.
[`random_state_`](#sklearn.decomposition.MiniBatchDictionaryLearning.random_state_ "sklearn.decomposition.MiniBatchDictionaryLearning.random_state_")RandomState instance
DEPRECATED: The attribute `random_state_` is deprecated in 1.1 and will be removed in 1.3.
**n\_steps\_**int
Number of mini-batches processed.
New in version 1.1.
See also
[`DictionaryLearning`](sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning "sklearn.decomposition.DictionaryLearning")
Find a dictionary that sparsely encodes data.
[`MiniBatchSparsePCA`](sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA "sklearn.decomposition.MiniBatchSparsePCA")
Mini-batch Sparse Principal Components Analysis.
[`SparseCoder`](sklearn.decomposition.sparsecoder#sklearn.decomposition.SparseCoder "sklearn.decomposition.SparseCoder")
Find a sparse representation of data from a fixed, precomputed dictionary.
[`SparsePCA`](sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA")
Sparse Principal Components Analysis.
#### References
J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning for sparse coding (<https://www.di.ens.fr/sierra/pdfs/icml09.pdf>)
#### Examples
```
>>> import numpy as np
>>> from sklearn.datasets import make_sparse_coded_signal
>>> from sklearn.decomposition import MiniBatchDictionaryLearning
>>> X, dictionary, code = make_sparse_coded_signal(
... n_samples=100, n_components=15, n_features=20, n_nonzero_coefs=10,
... random_state=42, data_transposed=False)
>>> dict_learner = MiniBatchDictionaryLearning(
... n_components=15, batch_size=3, transform_algorithm='lasso_lars',
... transform_alpha=0.1, random_state=42)
>>> X_transformed = dict_learner.fit_transform(X)
```
We can check the level of sparsity of `X_transformed`:
```
>>> np.mean(X_transformed == 0)
0.38...
```
We can compare the average squared euclidean norm of the reconstruction error of the sparse coded signal relative to the squared euclidean norm of the original signal:
```
>>> X_hat = X_transformed @ dict_learner.components_
>>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
0.059...
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.decomposition.MiniBatchDictionaryLearning.fit "sklearn.decomposition.MiniBatchDictionaryLearning.fit")(X[, y]) | Fit the model from data in X. |
| [`fit_transform`](#sklearn.decomposition.MiniBatchDictionaryLearning.fit_transform "sklearn.decomposition.MiniBatchDictionaryLearning.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.decomposition.MiniBatchDictionaryLearning.get_feature_names_out "sklearn.decomposition.MiniBatchDictionaryLearning.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.decomposition.MiniBatchDictionaryLearning.get_params "sklearn.decomposition.MiniBatchDictionaryLearning.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.decomposition.MiniBatchDictionaryLearning.partial_fit "sklearn.decomposition.MiniBatchDictionaryLearning.partial_fit")(X[, y, iter\_offset]) | Update the model using the data in X as a mini-batch. |
| [`set_params`](#sklearn.decomposition.MiniBatchDictionaryLearning.set_params "sklearn.decomposition.MiniBatchDictionaryLearning.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.decomposition.MiniBatchDictionaryLearning.transform "sklearn.decomposition.MiniBatchDictionaryLearning.transform")(X) | Encode the data as a sparse combination of the dictionary atoms. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L2224)
Fit the model from data in X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.decomposition.MiniBatchDictionaryLearning.fit "sklearn.decomposition.MiniBatchDictionaryLearning.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*inner\_stats\_
DEPRECATED: The attribute `inner_stats_` is deprecated in 1.1 and will be removed in 1.3.
*property*iter\_offset\_
DEPRECATED: The attribute `iter_offset_` is deprecated in 1.1 and will be removed in 1.3.
partial\_fit(*X*, *y=None*, *iter\_offset='deprecated'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L2343)
Update the model using the data in X as a mini-batch.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**Ignored
Not used, present for API consistency by convention.
**iter\_offset**int, default=None
The number of iterations on data batches that have been performed before this call to `partial_fit`. This is optional: if no number is passed, the memory of the object is used.
Deprecated since version 1.1: `iter_offset` will be removed in 1.3.
Returns:
**self**object
Return the instance itself.
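#### Examples
A hedged sketch of streaming mini-batches through `partial_fit` on synthetic data; the batch split and parameter values below are illustrative:
```
>>> import numpy as np
>>> from sklearn.decomposition import MiniBatchDictionaryLearning
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(200, 20)
>>> dict_learner = MiniBatchDictionaryLearning(n_components=10, batch_size=20,
...                                            random_state=0)
>>> for batch in np.array_split(X, 10):
...     dict_learner = dict_learner.partial_fit(batch)
>>> dict_learner.components_.shape
(10, 20)
```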
*property*random\_state\_
DEPRECATED: The attribute `random_state_` is deprecated in 1.1 and will be removed in 1.3.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L1217)
Encode the data as a sparse combination of the dictionary atoms.
Coding method is determined by the object parameter `transform_algorithm`.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Test data to be transformed, must have the same number of features as the data used to train the model.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_components)
Transformed data.
Examples using `sklearn.decomposition.MiniBatchDictionaryLearning`
------------------------------------------------------------------
[Faces dataset decompositions](../../auto_examples/decomposition/plot_faces_decomposition#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py)
[Image denoising using dictionary learning](../../auto_examples/decomposition/plot_image_denoising#sphx-glr-auto-examples-decomposition-plot-image-denoising-py)
scikit_learn sklearn.feature_extraction.FeatureHasher sklearn.feature\_extraction.FeatureHasher
=========================================
*class*sklearn.feature\_extraction.FeatureHasher(*n\_features=1048576*, *\**, *input\_type='dict'*, *dtype=<class 'numpy.float64'>*, *alternate\_sign=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/_hash.py#L18)
Implements feature hashing, aka the hashing trick.
This class turns sequences of symbolic feature names (strings) into scipy.sparse matrices, using a hash function to compute the matrix column corresponding to a name. The hash function employed is the signed 32-bit version of Murmurhash3.
Feature names of type byte string are used as-is. Unicode strings are converted to UTF-8 first, but no Unicode normalization is done. Feature values must be (finite) numbers.
This class is a low-memory alternative to DictVectorizer and CountVectorizer, intended for large-scale (online) learning and situations where memory is tight, e.g. when running prediction code on embedded devices.
Read more in the [User Guide](../feature_extraction#feature-hashing).
New in version 0.13.
Parameters:
**n\_features**int, default=2\*\*20
The number of features (columns) in the output matrices. Small numbers of features are likely to cause hash collisions, but large numbers will cause larger coefficient dimensions in linear learners.
**input\_type**str, default=’dict’
Choose a string from {‘dict’, ‘pair’, ‘string’}. Either “dict” (the default) to accept dictionaries over (feature\_name, value); “pair” to accept pairs of (feature\_name, value); or “string” to accept single strings. feature\_name should be a string, while value should be a number. In the case of “string”, a value of 1 is implied. The feature\_name is hashed to find the appropriate column for the feature. The value’s sign might be flipped in the output (but see alternate\_sign, below).
**dtype**numpy dtype, default=np.float64
The type of feature values. Passed to scipy.sparse matrix constructors as the dtype argument. Do not set this to bool, np.boolean or any unsigned integer type.
**alternate\_sign**bool, default=True
When True, an alternating sign is added to the features so as to approximately conserve the inner product in the hashed space even for small n\_features. This approach is similar to sparse random projection.
Changed in version 0.19: `alternate_sign` replaces the now deprecated `non_negative` parameter.
See also
[`DictVectorizer`](sklearn.feature_extraction.dictvectorizer#sklearn.feature_extraction.DictVectorizer "sklearn.feature_extraction.DictVectorizer")
Vectorizes string-valued features using a hash table.
[`sklearn.preprocessing.OneHotEncoder`](sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder")
Handles nominal/categorical features.
#### Examples
```
>>> from sklearn.feature_extraction import FeatureHasher
>>> h = FeatureHasher(n_features=10)
>>> D = [{'dog': 1, 'cat':2, 'elephant':4},{'dog': 2, 'run': 5}]
>>> f = h.transform(D)
>>> f.toarray()
array([[ 0., 0., -4., -1., 0., 0., 0., 0., 0., 2.],
[ 0., 0., 0., -2., -5., 0., 0., 0., 0., 0.]])
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.feature_extraction.FeatureHasher.fit "sklearn.feature_extraction.FeatureHasher.fit")([X, y]) | No-op. |
| [`fit_transform`](#sklearn.feature_extraction.FeatureHasher.fit_transform "sklearn.feature_extraction.FeatureHasher.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_params`](#sklearn.feature_extraction.FeatureHasher.get_params "sklearn.feature_extraction.FeatureHasher.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.feature_extraction.FeatureHasher.set_params "sklearn.feature_extraction.FeatureHasher.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_extraction.FeatureHasher.transform "sklearn.feature_extraction.FeatureHasher.transform")(raw\_X) | Transform a sequence of instances to a scipy.sparse matrix. |
fit(*X=None*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/_hash.py#L114)
No-op.
This method doesn’t do anything. It exists purely for compatibility with the scikit-learn transformer API.
Parameters:
**X**Ignored
Not used, present here for API consistency by convention.
**y**Ignored
Not used, present here for API consistency by convention.
Returns:
**self**object
FeatureHasher class instance.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*raw\_X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/_hash.py#L137)
Transform a sequence of instances to a scipy.sparse matrix.
Parameters:
**raw\_X**iterable over iterable over raw features, length = n\_samples
Samples. Each sample must be an iterable (e.g., a list or tuple) containing/generating feature names (and optionally values, see the input\_type constructor argument) which will be hashed. raw\_X need not support the len function, so it can be the result of a generator; n\_samples is determined on the fly.
Returns:
**X**sparse matrix of shape (n\_samples, n\_features)
Feature matrix, for use with estimators or further transformers.
Examples using `sklearn.feature_extraction.FeatureHasher`
---------------------------------------------------------
[FeatureHasher and DictVectorizer Comparison](../../auto_examples/text/plot_hashing_vs_dict_vectorizer#sphx-glr-auto-examples-text-plot-hashing-vs-dict-vectorizer-py)
scikit_learn sklearn.metrics.dcg_score sklearn.metrics.dcg\_score
==========================
sklearn.metrics.dcg\_score(*y\_true*, *y\_score*, *\**, *k=None*, *log\_base=2*, *sample\_weight=None*, *ignore\_ties=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L1384)
Compute Discounted Cumulative Gain.
Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount.
This ranking metric yields a high value if true labels are ranked high by `y_score`.
Usually the Normalized Discounted Cumulative Gain (NDCG, computed by ndcg\_score) is preferred.
Parameters:
**y\_true**ndarray of shape (n\_samples, n\_labels)
True targets of multilabel classification, or true scores of entities to be ranked.
**y\_score**ndarray of shape (n\_samples, n\_labels)
Target scores, can either be probability estimates, confidence values, or non-thresholded measure of decisions (as returned by “decision\_function” on some classifiers).
**k**int, default=None
Only consider the highest k scores in the ranking. If None, use all outputs.
**log\_base**float, default=2
Base of the logarithm used for the discount. A low value means a sharper discount (top results are more important).
**sample\_weight**ndarray of shape (n\_samples,), default=None
Sample weights. If `None`, all samples are given the same weight.
**ignore\_ties**bool, default=False
Assume that there are no ties in y\_score (which is likely to be the case if y\_score is continuous) for efficiency gains.
Returns:
**discounted\_cumulative\_gain**float
The averaged sample DCG scores.
See also
[`ndcg_score`](sklearn.metrics.ndcg_score#sklearn.metrics.ndcg_score "sklearn.metrics.ndcg_score")
The Discounted Cumulative Gain divided by the Ideal Discounted Cumulative Gain (the DCG obtained for a perfect ranking), in order to have a score between 0 and 1.
#### References
[Wikipedia entry for Discounted Cumulative Gain](https://en.wikipedia.org/wiki/Discounted_cumulative_gain).
Jarvelin, K., & Kekalainen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422-446.
Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May). A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013).
McSherry, F., & Najork, M. (2008, March). Computing information retrieval performance measures efficiently in the presence of tied scores. In European conference on information retrieval (pp. 414-421). Springer, Berlin, Heidelberg.
#### Examples
```
>>> import numpy as np
>>> from sklearn.metrics import dcg_score
>>> # we have ground-truth relevance of some answers to a query:
>>> true_relevance = np.asarray([[10, 0, 0, 1, 5]])
>>> # we predict scores for the answers
>>> scores = np.asarray([[.1, .2, .3, 4, 70]])
>>> dcg_score(true_relevance, scores)
9.49...
>>> # we can set k to truncate the sum; only top k answers contribute
>>> dcg_score(true_relevance, scores, k=2)
5.63...
>>> # now we have some ties in our prediction
>>> scores = np.asarray([[1, 0, 0, 0, 1]])
>>> # by default ties are averaged, so here we get the average true
>>> # relevance of our top predictions: (10 + 5) / 2 = 7.5
>>> dcg_score(true_relevance, scores, k=1)
7.5
>>> # we can choose to ignore ties for faster results, but only
>>> # if we know there aren't ties in our scores, otherwise we get
>>> # wrong results:
>>> dcg_score(true_relevance,
... scores, k=1, ignore_ties=True)
5.0
```
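The first result above can be reproduced by hand: rank the true relevances by decreasing predicted score and apply the logarithmic discount. A minimal sketch, assuming the default `log_base=2`:
```
import numpy as np

true_relevance = np.asarray([10, 0, 0, 1, 5])
scores = np.asarray([0.1, 0.2, 0.3, 4, 70])

# order the true relevances by decreasing predicted score
ranked = true_relevance[np.argsort(scores)[::-1]]

# discount factor 1 / log2(rank + 1), with ranks starting at 1
discount = 1.0 / np.log2(np.arange(2, ranked.size + 2))
print(np.sum(ranked * discount))  # ~9.49, matching dcg_score above
```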
scikit_learn sklearn.datasets.fetch_20newsgroups_vectorized sklearn.datasets.fetch\_20newsgroups\_vectorized
================================================
sklearn.datasets.fetch\_20newsgroups\_vectorized(*\**, *subset='train'*, *remove=()*, *data\_home=None*, *download\_if\_missing=True*, *return\_X\_y=False*, *normalize=True*, *as\_frame=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_twenty_newsgroups.py#L339)
Load and vectorize the 20 newsgroups dataset (classification).
Download it if necessary.
This is a convenience function; the transformation is done using the default settings for [`CountVectorizer`](sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer"). For more advanced usage (stopword filtering, n-gram extraction, etc.), combine fetch\_20newsgroups with a custom [`CountVectorizer`](sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer"), [`HashingVectorizer`](sklearn.feature_extraction.text.hashingvectorizer#sklearn.feature_extraction.text.HashingVectorizer "sklearn.feature_extraction.text.HashingVectorizer"), [`TfidfTransformer`](sklearn.feature_extraction.text.tfidftransformer#sklearn.feature_extraction.text.TfidfTransformer "sklearn.feature_extraction.text.TfidfTransformer") or [`TfidfVectorizer`](sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer").
The resulting counts are normalized using [`sklearn.preprocessing.normalize`](sklearn.preprocessing.normalize#sklearn.preprocessing.normalize "sklearn.preprocessing.normalize") unless normalize is set to False.
| | |
| --- | --- |
| Classes | 20 |
| Samples total | 18846 |
| Dimensionality | 130107 |
| Features | real |
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/real_world.html#newsgroups-dataset).
Parameters:
**subset**{‘train’, ‘test’, ‘all’}, default=’train’
Select the dataset to load: ‘train’ for the training set, ‘test’ for the test set, ‘all’ for both, with shuffled ordering.
**remove**tuple, default=()
May contain any subset of (‘headers’, ‘footers’, ‘quotes’). Each of these are kinds of text that will be detected and removed from the newsgroup posts, preventing classifiers from overfitting on metadata.
‘headers’ removes newsgroup headers, ‘footers’ removes blocks at the ends of posts that look like signatures, and ‘quotes’ removes lines that appear to be quoting another post.
**data\_home**str, default=None
Specify a download and cache folder for the datasets. If None, all scikit-learn data is stored in ‘~/scikit\_learn\_data’ subfolders.
**download\_if\_missing**bool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
**return\_X\_y**bool, default=False
If True, returns `(data.data, data.target)` instead of a Bunch object.
New in version 0.20.
**normalize**bool, default=True
If True, normalizes each document’s feature vector to unit norm using [`sklearn.preprocessing.normalize`](sklearn.preprocessing.normalize#sklearn.preprocessing.normalize "sklearn.preprocessing.normalize").
New in version 0.22.
**as\_frame**bool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string, or categorical). The target is a pandas DataFrame or Series depending on the number of `target_columns`.
New in version 0.24.
Returns:
**bunch**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
data: {sparse matrix, dataframe} of shape (n\_samples, n\_features)
The input data matrix. If `as_frame` is `True`, `data` is a pandas DataFrame with sparse columns.
target: {ndarray, series} of shape (n\_samples,)
The target labels. If `as_frame` is `True`, `target` is a pandas Series.
target\_names: list of shape (n\_classes,)
The names of target classes.
DESCR: str
The full description of the dataset.
frame: dataframe of shape (n\_samples, n\_features + 1)
Only present when `as_frame=True`. Pandas DataFrame with `data` and `target`.
New in version 0.24.
**(data, target)**tuple if `return_X_y` is True
`data` and `target` would be of the format defined in the `Bunch` description above.
New in version 0.20.
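A minimal usage sketch (the first call downloads the dataset and caches it under `data_home` if it is not already present):
```
from sklearn.datasets import fetch_20newsgroups_vectorized

# load the vectorized training split directly as (X, y)
X, y = fetch_20newsgroups_vectorized(subset="train", return_X_y=True)
print(X.shape)  # (11314, 130107) sparse matrix of normalized counts
print(y.shape)  # (11314,) integer labels in the range [0, 19]
```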
Examples using `sklearn.datasets.fetch_20newsgroups_vectorized`
---------------------------------------------------------------
[Model Complexity Influence](../../auto_examples/applications/plot_model_complexity_influence#sphx-glr-auto-examples-applications-plot-model-complexity-influence-py)
[Multiclass sparse logistic regression on 20newgroups](../../auto_examples/linear_model/plot_sparse_logistic_regression_20newsgroups#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-20newsgroups-py)
[The Johnson-Lindenstrauss bound for embedding with random projections](../../auto_examples/miscellaneous/plot_johnson_lindenstrauss_bound#sphx-glr-auto-examples-miscellaneous-plot-johnson-lindenstrauss-bound-py)
scikit_learn sklearn.dummy.DummyClassifier sklearn.dummy.DummyClassifier
=============================
*class*sklearn.dummy.DummyClassifier(*\**, *strategy='prior'*, *random\_state=None*, *constant=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L23)
DummyClassifier makes predictions that ignore the input features.
This classifier serves as a simple baseline to compare against other more complex classifiers.
The specific behavior of the baseline is selected with the `strategy` parameter.
All strategies make predictions that ignore the input feature values passed as the `X` argument to `fit` and `predict`. The predictions, however, typically depend on values observed in the `y` parameter passed to `fit`.
Note that the “stratified” and “uniform” strategies lead to non-deterministic predictions that can be rendered deterministic by setting the `random_state` parameter if needed. The other strategies are naturally deterministic and, once fit, always return the same constant prediction for any value of `X`.
Read more in the [User Guide](../model_evaluation#dummy-estimators).
New in version 0.13.
Parameters:
**strategy**{“most\_frequent”, “prior”, “stratified”, “uniform”, “constant”}, default=”prior”
Strategy to use to generate predictions.
* “most\_frequent”: the `predict` method always returns the most frequent class label in the observed `y` argument passed to `fit`. The `predict_proba` method returns the matching one-hot encoded vector.
* “prior”: the `predict` method always returns the most frequent class label in the observed `y` argument passed to `fit` (like “most\_frequent”). `predict_proba` always returns the empirical class distribution of `y` also known as the empirical class prior distribution.
* “stratified”: the `predict_proba` method randomly samples one-hot vectors from a multinomial distribution parametrized by the empirical class prior probabilities. The `predict` method returns the class label which got probability one in the one-hot vector of `predict_proba`. Each sampled row of both methods is therefore independent and identically distributed.
* “uniform”: generates predictions uniformly at random from the list of unique classes observed in `y`, i.e. each class has equal probability.
* “constant”: always predicts a constant label that is provided by the user. This is useful for metrics that evaluate a non-majority class.
Changed in version 0.24: The default value of `strategy` has changed to “prior” in version 0.24.
**random\_state**int, RandomState instance or None, default=None
Controls the randomness to generate the predictions when `strategy='stratified'` or `strategy='uniform'`. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**constant**int or str or array-like of shape (n\_outputs,), default=None
The explicit constant as predicted by the “constant” strategy. This parameter is useful only for the “constant” strategy.
Attributes:
**classes\_**ndarray of shape (n\_classes,) or list of such arrays
Unique class labels observed in `y`. For multi-output classification problems, this attribute is a list of arrays as each output has an independent set of possible classes.
**n\_classes\_**int or list of int
Number of labels for each output.
**class\_prior\_**ndarray of shape (n\_classes,) or list of such arrays
Frequency of each class observed in `y`. For multioutput classification problems, this is computed independently for each output.
**n\_outputs\_**int
Number of outputs.
[`n_features_in_`](#sklearn.dummy.DummyClassifier.n_features_in_ "sklearn.dummy.DummyClassifier.n_features_in_")`None`
DEPRECATED: `n_features_in_` is deprecated in 1.0 and will be removed in 1.2.
**sparse\_output\_**bool
True if the array returned from predict is to be in sparse CSC format. Is automatically set to True if the input `y` is passed in sparse format.
See also
[`DummyRegressor`](sklearn.dummy.dummyregressor#sklearn.dummy.DummyRegressor "sklearn.dummy.DummyRegressor")
Regressor that makes predictions using simple rules.
#### Examples
```
>>> import numpy as np
>>> from sklearn.dummy import DummyClassifier
>>> X = np.array([-1, 1, 1, 1])
>>> y = np.array([0, 1, 1, 1])
>>> dummy_clf = DummyClassifier(strategy="most_frequent")
>>> dummy_clf.fit(X, y)
DummyClassifier(strategy='most_frequent')
>>> dummy_clf.predict(X)
array([1, 1, 1, 1])
>>> dummy_clf.score(X, y)
0.75
```
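A short sketch of two other deterministic strategies, reusing `X` and `y` from the example above; the probabilities follow from the observed class frequencies (1/4 for class 0, 3/4 for class 1):
```
from sklearn.dummy import DummyClassifier

# "prior" predicts the majority class and exposes the empirical class
# distribution through predict_proba
prior_clf = DummyClassifier(strategy="prior").fit(X, y)
print(prior_clf.predict(X))        # [1 1 1 1]
print(prior_clf.predict_proba(X))  # every row is [0.25, 0.75]

# "constant" always predicts the user-supplied label
const_clf = DummyClassifier(strategy="constant", constant=0).fit(X, y)
print(const_clf.predict(X))        # [0 0 0 0]
```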
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.dummy.DummyClassifier.fit "sklearn.dummy.DummyClassifier.fit")(X, y[, sample\_weight]) | Fit the baseline classifier. |
| [`get_params`](#sklearn.dummy.DummyClassifier.get_params "sklearn.dummy.DummyClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.dummy.DummyClassifier.predict "sklearn.dummy.DummyClassifier.predict")(X) | Perform classification on test vectors X. |
| [`predict_log_proba`](#sklearn.dummy.DummyClassifier.predict_log_proba "sklearn.dummy.DummyClassifier.predict_log_proba")(X) | Return log probability estimates for the test vectors X. |
| [`predict_proba`](#sklearn.dummy.DummyClassifier.predict_proba "sklearn.dummy.DummyClassifier.predict_proba")(X) | Return probability estimates for the test vectors X. |
| [`score`](#sklearn.dummy.DummyClassifier.score "sklearn.dummy.DummyClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.dummy.DummyClassifier.set_params "sklearn.dummy.DummyClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L142)
Fit the baseline classifier.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_features\_in\_
DEPRECATED: `n_features_in_` is deprecated in 1.0 and will be removed in 1.2.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L242)
Perform classification on test vectors X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test data.
Returns:
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
Predicted target values for X.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L392)
Return log probability estimates for the test vectors X.
Parameters:
**X**{array-like, object with finite length or shape}
Training data.
Returns:
**P**ndarray of shape (n\_samples, n\_classes) or list of such arrays
Returns the log probability of the sample for each class in the model, where classes are ordered arithmetically for each output.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L329)
Return probability estimates for the test vectors X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test data.
Returns:
**P**ndarray of shape (n\_samples, n\_classes) or list of such arrays
Returns the probability of the sample for each class in the model, where classes are ordered arithmetically, for each output.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/dummy.py#L424)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**None or array-like of shape (n\_samples, n\_features)
Test samples. Passing None as test samples gives the same result as passing real test samples, since DummyClassifier operates independently of the sampled observations.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for X.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of self.predict(X) wrt. y.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.metrics.hamming_loss sklearn.metrics.hamming\_loss
=============================
sklearn.metrics.hamming\_loss(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_classification.py#L2237)
Compute the average Hamming loss.
The Hamming loss is the fraction of labels that are incorrectly predicted.
Read more in the [User Guide](../model_evaluation#hamming-loss).
Parameters:
**y\_true**1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels.
**y\_pred**1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
New in version 0.18.
Returns:
**loss**float or int
Return the average Hamming loss between the elements of `y_true` and `y_pred`.
See also
[`accuracy_score`](sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score "sklearn.metrics.accuracy_score")
Compute the accuracy score. By default, the function will return the fraction of correct predictions divided by the total number of predictions.
[`jaccard_score`](sklearn.metrics.jaccard_score#sklearn.metrics.jaccard_score "sklearn.metrics.jaccard_score")
Compute the Jaccard similarity coefficient score.
[`zero_one_loss`](sklearn.metrics.zero_one_loss#sklearn.metrics.zero_one_loss "sklearn.metrics.zero_one_loss")
Compute the Zero-one classification loss. By default, the function will return the percentage of imperfectly predicted subsets.
#### Notes
In multiclass classification, the Hamming loss corresponds to the Hamming distance between `y_true` and `y_pred`, which is equivalent to the subset `zero_one_loss` function when the `normalize` parameter is set to True.
In multilabel classification, the Hamming loss is different from the subset zero-one loss. The zero-one loss considers the entire set of labels for a given sample incorrect if it does not entirely match the true set of labels. Hamming loss is more forgiving in that it penalizes only the individual labels.
The Hamming loss is upper-bounded by the subset zero-one loss when the `normalize` parameter is set to True. It is always between 0 and 1, lower being better.
#### References
[1] Grigorios Tsoumakas, Ioannis Katakis. Multi-Label Classification: An Overview. International Journal of Data Warehousing & Mining, 3(3), 1-13, July-September 2007.
[2] [Wikipedia entry on the Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance).
#### Examples
```
>>> from sklearn.metrics import hamming_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> hamming_loss(y_true, y_pred)
0.25
```
In the multilabel case with binary label indicators:
```
>>> import numpy as np
>>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
0.75
```
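For binary label indicators, the same value is simply the fraction of mismatched entries, which makes a convenient sanity check (a sketch):
```
import numpy as np

y_true = np.array([[0, 1], [1, 1]])
y_pred = np.zeros((2, 2))

# 3 of the 4 individual labels differ -> 0.75
print(np.mean(y_true != y_pred))
```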
Examples using `sklearn.metrics.hamming_loss`
---------------------------------------------
[Model Complexity Influence](../../auto_examples/applications/plot_model_complexity_influence#sphx-glr-auto-examples-applications-plot-model-complexity-influence-py)
scikit_learn sklearn.preprocessing.scale sklearn.preprocessing.scale
===========================
sklearn.preprocessing.scale(*X*, *\**, *axis=0*, *with\_mean=True*, *with\_std=True*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L121)
Standardize a dataset along any axis.
Center to the mean and component wise scale to unit variance.
Read more in the [User Guide](../preprocessing#preprocessing-scaler).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data to center and scale.
**axis**int, default=0
Axis used to compute the means and standard deviations along. If 0, independently standardize each feature; otherwise (if 1) standardize each sample.
**with\_mean**bool, default=True
If True, center the data before scaling.
**with\_std**bool, default=True
If True, scale the data to unit variance (or equivalently, unit standard deviation).
**copy**bool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array or a scipy.sparse CSC matrix and if axis is 1).
Returns:
**X\_tr**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
The transformed data.
See also
[`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler")
Performs scaling to unit variance using the Transformer API (e.g. as part of a preprocessing [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")).
#### Notes
This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems.
Instead, the caller is expected either to set `with_mean=False` explicitly (in that case, only variance scaling will be performed on the features of the CSC matrix) or to call `X.toarray()` if the materialized dense array is expected to fit in memory.
To avoid memory copy the caller should pass a CSC matrix.
NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation.
We use a biased estimator for the standard deviation, equivalent to `numpy.std(x, ddof=0)`. Note that the choice of `ddof` is unlikely to affect model performance.
For a comparison of the different scalers, transformers, and normalizers, see [examples/preprocessing/plot\_all\_scaling.py](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
Warning
Risk of data leak
Do not use [`scale`](#sklearn.preprocessing.scale "sklearn.preprocessing.scale") unless you know what you are doing. A common mistake is to apply it to the entire data *before* splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") within a [Pipeline](../compose#pipeline) in order to prevent most risks of data leaking: `pipe = make_pipeline(StandardScaler(), LogisticRegression())`.
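A minimal sketch of the recommended pattern: the scaler is fitted on the training split only, inside a pipeline, so no test-set statistics leak into training (the dataset and split are illustrative):
```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# fitting the pipeline fits the scaler on X_train only
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```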
scikit_learn sklearn.utils.validation.check_is_fitted sklearn.utils.validation.check\_is\_fitted
==========================================
sklearn.utils.validation.check\_is\_fitted(*estimator*, *attributes=None*, *\**, *msg=None*, *all\_or\_any=<built-in function all>*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L1276)
Perform is\_fitted validation for estimator.
Checks if the estimator is fitted by verifying the presence of fitted attributes (ending with a trailing underscore) and otherwise raises a NotFittedError with the given message.
If an estimator does not set any attributes with a trailing underscore, it can define a `__sklearn_is_fitted__` method returning a boolean to specify if the estimator is fitted or not.
Parameters:
**estimator**estimator instance
estimator instance for which the check is performed.
**attributes**str, list or tuple of str, default=None
Attribute name(s) given as a string or a list/tuple of strings, e.g.: `["coef_", "estimator_", ...]` or `"coef_"`.
If `None`, `estimator` is considered fitted if there exists an attribute that ends with an underscore and does not start with a double underscore.
**msg**str, default=None
The default error message is, “This %(name)s instance is not fitted yet. Call ‘fit’ with appropriate arguments before using this estimator.”
For custom messages, if “%(name)s” is present in the message string, it is substituted with the estimator name.
Eg. : “Estimator, %(name)s, must be fitted before sparsifying”.
**all\_or\_any**callable, {all, any}, default=all
Specify whether all or any of the given attributes must exist.
Returns:
None
Raises:
NotFittedError
If the attributes are not found.
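A small sketch of both conventions, the usual trailing-underscore attribute and the optional `__sklearn_is_fitted__` hook; the `MeanEstimator` class here is hypothetical:
```
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.exceptions import NotFittedError
from sklearn.utils.validation import check_is_fitted

class MeanEstimator(BaseEstimator):  # hypothetical example estimator
    def fit(self, X, y=None):
        # the trailing underscore marks a fitted attribute
        self.mean_ = np.asarray(X).mean()
        return self

    def __sklearn_is_fitted__(self):
        # optional hook: report the fitted state explicitly
        return hasattr(self, "mean_")

est = MeanEstimator()
try:
    check_is_fitted(est)
except NotFittedError as exc:
    print(exc)

est.fit([[1.0], [2.0]])
check_is_fitted(est)           # passes silently
check_is_fitted(est, "mean_")  # a specific attribute can also be checked
```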
Examples using `sklearn.utils.validation.check_is_fitted`
---------------------------------------------------------
[Inductive Clustering](../../auto_examples/cluster/plot_inductive_clustering#sphx-glr-auto-examples-cluster-plot-inductive-clustering-py)
scikit_learn sklearn.utils.extmath.weighted_mode sklearn.utils.extmath.weighted\_mode
====================================
sklearn.utils.extmath.weighted\_mode(*a*, *w*, *\**, *axis=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/extmath.py#L584)
Returns an array of the weighted modal (most common) value in a.
If there is more than one such value, only the first is returned. The bin-count for the modal bins is also returned.
This is an extension of the algorithm in scipy.stats.mode.
Parameters:
**a**array-like
n-dimensional array of which to find mode(s).
**w**array-like
n-dimensional array of weights for each value.
**axis**int, default=0
Axis along which to operate. Default is 0, i.e. the first axis.
Returns:
**vals**ndarray
Array of modal values.
**score**ndarray
Array of weighted counts for each mode.
See also
[`scipy.stats.mode`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mode.html#scipy.stats.mode "(in SciPy v1.9.3)")
#### Examples
```
>>> from sklearn.utils.extmath import weighted_mode
>>> x = [4, 1, 4, 2, 4, 2]
>>> weights = [1, 1, 1, 1, 1, 1]
>>> weighted_mode(x, weights)
(array([4.]), array([3.]))
```
The value 4 appears three times: with uniform weights, the result is simply the mode of the distribution.
```
>>> weights = [1, 3, 0.5, 1.5, 1, 2] # deweight the 4's
>>> weighted_mode(x, weights)
(array([2.]), array([3.5]))
```
The value 2 has the highest score: it appears twice with weights of 1.5 and 2: the sum of these is 3.5.
scikit_learn sklearn.preprocessing.KernelCenterer sklearn.preprocessing.KernelCenterer
====================================
*class*sklearn.preprocessing.KernelCenterer[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2126)
Center an arbitrary kernel matrix \(K\).
Let us define a kernel \(K\) such that:
\[K(X, Y) = \phi(X) \cdot \phi(Y)^{T}\] where \(\phi(X)\) is a function mapping the rows of \(X\) to a Hilbert space and \(K\) is of shape `(n_samples, n_samples)`.
This class computes \(\tilde{K}(X, Y)\) such that:
\[\tilde{K}(X, Y) = \tilde{\phi}(X) \cdot \tilde{\phi}(Y)^{T}\] where \(\tilde{\phi}(X)\) is the centered mapped data in the Hilbert space.
`KernelCenterer` centers the features without explicitly computing the mapping \(\phi(\cdot)\). Working with centered kernels is sometimes expected when dealing with algebraic computations such as the eigendecomposition performed in [`KernelPCA`](sklearn.decomposition.kernelpca#sklearn.decomposition.KernelPCA "sklearn.decomposition.KernelPCA"), for instance.
Read more in the [User Guide](../preprocessing#kernel-centering).
Attributes:
**K\_fit\_rows\_**ndarray of shape (n\_samples,)
Average of each column of kernel matrix.
**K\_fit\_all\_**float
Average of kernel matrix.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`sklearn.kernel_approximation.Nystroem`](sklearn.kernel_approximation.nystroem#sklearn.kernel_approximation.Nystroem "sklearn.kernel_approximation.Nystroem")
Approximate a kernel map using a subset of the training data.
#### References
[1] [Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. “Nonlinear component analysis as a kernel eigenvalue problem.” Neural computation 10.5 (1998): 1299-1319.](https://www.mlpack.org/papers/kpca.pdf)
#### Examples
```
>>> from sklearn.preprocessing import KernelCenterer
>>> from sklearn.metrics.pairwise import pairwise_kernels
>>> X = [[ 1., -2., 2.],
... [ -2., 1., 3.],
... [ 4., 1., -2.]]
>>> K = pairwise_kernels(X, metric='linear')
>>> K
array([[ 9., 2., -2.],
[ 2., 14., -13.],
[ -2., -13., 21.]])
>>> transformer = KernelCenterer().fit(K)
>>> transformer
KernelCenterer()
>>> transformer.transform(K)
array([[ 5., 0., -5.],
[ 0., 14., -14.],
[ -5., -14., 19.]])
```
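For a linear kernel, centering the kernel matrix is equivalent to centering the features themselves, which gives a handy sanity check (a sketch reusing the data above):
```
import numpy as np
from sklearn.metrics.pairwise import pairwise_kernels
from sklearn.preprocessing import KernelCenterer

X = np.array([[1., -2., 2.], [-2., 1., 3.], [4., 1., -2.]])
K = pairwise_kernels(X, metric="linear")

K_centered = KernelCenterer().fit_transform(K)
X_centered = X - X.mean(axis=0)

# centering K is the same as computing the linear kernel of centered X
print(np.allclose(K_centered, X_centered @ X_centered.T))  # True
```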
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.KernelCenterer.fit "sklearn.preprocessing.KernelCenterer.fit")(K[, y]) | Fit KernelCenterer. |
| [`fit_transform`](#sklearn.preprocessing.KernelCenterer.fit_transform "sklearn.preprocessing.KernelCenterer.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.preprocessing.KernelCenterer.get_feature_names_out "sklearn.preprocessing.KernelCenterer.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.preprocessing.KernelCenterer.get_params "sklearn.preprocessing.KernelCenterer.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.preprocessing.KernelCenterer.set_params "sklearn.preprocessing.KernelCenterer.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.KernelCenterer.transform "sklearn.preprocessing.KernelCenterer.transform")(K[, copy]) | Center kernel matrix. |
fit(*K*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2208)
Fit KernelCenterer.
Parameters:
**K**ndarray of shape (n\_samples, n\_samples)
Kernel matrix.
**y**None
Ignored.
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.preprocessing.KernelCenterer.fit "sklearn.preprocessing.KernelCenterer.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*K*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L2237)
Center kernel matrix.
Parameters:
**K**ndarray of shape (n\_samples1, n\_samples2)
Kernel matrix.
**copy**bool, default=True
Set to False to perform inplace computation.
Returns:
**K\_new**ndarray of shape (n\_samples1, n\_samples2)
Centered kernel matrix.
scikit_learn sklearn.preprocessing.OrdinalEncoder sklearn.preprocessing.OrdinalEncoder
====================================
*class*sklearn.preprocessing.OrdinalEncoder(*\**, *categories='auto'*, *dtype=<class 'numpy.float64'>*, *handle\_unknown='error'*, *unknown\_value=None*, *encoded\_missing\_value=nan*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_encoders.py#L1117)
Encode categorical features as an integer array.
The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are converted to ordinal integers. This results in a single column of integers (0 to n\_categories - 1) per feature.
Read more in the [User Guide](../preprocessing#preprocessing-categorical-features).
New in version 0.20.
Parameters:
**categories**‘auto’ or a list of array-like, default=’auto’
Categories (unique values) per feature:
* ‘auto’ : Determine categories automatically from the training data.
* list : `categories[i]` holds the categories expected in the ith column. The passed categories should not mix strings and numeric values, and should be sorted in case of numeric values.
The used categories can be found in the `categories_` attribute.
**dtype**number type, default np.float64
Desired dtype of output.
**handle\_unknown**{‘error’, ‘use\_encoded\_value’}, default=’error’
When set to ‘error’ an error will be raised in case an unknown categorical feature is present during transform. When set to ‘use\_encoded\_value’, the encoded value of unknown categories will be set to the value given for the parameter `unknown_value`. In [`inverse_transform`](#sklearn.preprocessing.OrdinalEncoder.inverse_transform "sklearn.preprocessing.OrdinalEncoder.inverse_transform"), an unknown category will be denoted as None.
New in version 0.24.
**unknown\_value**int or np.nan, default=None
When the parameter handle\_unknown is set to ‘use\_encoded\_value’, this parameter is required and will set the encoded value of unknown categories. It has to be distinct from the values used to encode any of the categories in `fit`. If set to np.nan, the `dtype` parameter must be a float dtype.
New in version 0.24.
**encoded\_missing\_value**int or np.nan, default=np.nan
Encoded value of missing categories. If set to `np.nan`, then the `dtype` parameter must be a float dtype.
New in version 1.1.
Attributes:
**categories\_**list of arrays
The categories of each feature determined during `fit` (in order of the features in X and corresponding with the output of `transform`). This does not include categories that weren’t seen during `fit`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 1.0.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`OneHotEncoder`](sklearn.preprocessing.onehotencoder#sklearn.preprocessing.OneHotEncoder "sklearn.preprocessing.OneHotEncoder")
Performs a one-hot encoding of categorical features.
[`LabelEncoder`](sklearn.preprocessing.labelencoder#sklearn.preprocessing.LabelEncoder "sklearn.preprocessing.LabelEncoder")
Encodes target labels with values between 0 and `n_classes-1`.
#### Examples
Given a dataset with two features, we let the encoder find the unique values per feature and transform the data to an ordinal encoding.
```
>>> from sklearn.preprocessing import OrdinalEncoder
>>> enc = OrdinalEncoder()
>>> X = [['Male', 1], ['Female', 3], ['Female', 2]]
>>> enc.fit(X)
OrdinalEncoder()
>>> enc.categories_
[array(['Female', 'Male'], dtype=object), array([1, 2, 3], dtype=object)]
>>> enc.transform([['Female', 3], ['Male', 1]])
array([[0., 2.],
[1., 0.]])
```
```
>>> enc.inverse_transform([[1, 0], [0, 1]])
array([['Male', 1],
['Female', 2]], dtype=object)
```
By default, [`OrdinalEncoder`](#sklearn.preprocessing.OrdinalEncoder "sklearn.preprocessing.OrdinalEncoder") is lenient towards missing values by propagating them.
```
>>> import numpy as np
>>> X = [['Male', 1], ['Female', 3], ['Female', np.nan]]
>>> enc.fit_transform(X)
array([[ 1., 0.],
[ 0., 1.],
[ 0., nan]])
```
You can use the parameter `encoded_missing_value` to encode missing values.
```
>>> enc.set_params(encoded_missing_value=-1).fit_transform(X)
array([[ 1., 0.],
[ 0., 1.],
[ 0., -1.]])
```
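Unknown categories seen only at transform time can be handled in a similar spirit with `handle_unknown='use_encoded_value'`; a short sketch (the unseen category `'Other'` is made up):
```
from sklearn.preprocessing import OrdinalEncoder

enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
enc.fit([["Male", 1], ["Female", 3], ["Female", 2]])

# categories not seen during fit are mapped to unknown_value
print(enc.transform([["Other", 1], ["Female", 5]]))
# [[-1.  0.]
#  [ 0. -1.]]
```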
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.OrdinalEncoder.fit "sklearn.preprocessing.OrdinalEncoder.fit")(X[, y]) | Fit the OrdinalEncoder to X. |
| [`fit_transform`](#sklearn.preprocessing.OrdinalEncoder.fit_transform "sklearn.preprocessing.OrdinalEncoder.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.preprocessing.OrdinalEncoder.get_feature_names_out "sklearn.preprocessing.OrdinalEncoder.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.preprocessing.OrdinalEncoder.get_params "sklearn.preprocessing.OrdinalEncoder.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.OrdinalEncoder.inverse_transform "sklearn.preprocessing.OrdinalEncoder.inverse_transform")(X) | Convert the data back to the original representation. |
| [`set_params`](#sklearn.preprocessing.OrdinalEncoder.set_params "sklearn.preprocessing.OrdinalEncoder.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.OrdinalEncoder.transform "sklearn.preprocessing.OrdinalEncoder.transform")(X) | Transform X to ordinal codes. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_encoders.py#L1246)
Fit the OrdinalEncoder to X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data to determine the categories of each feature.
**y**None
Ignored. This parameter exists only for compatibility with [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline").
Returns:
**self**object
Fitted encoder.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L880)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Same as input features.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_encoders.py#L1377)
Convert the data back to the original representation.
Parameters:
**X**array-like of shape (n\_samples, n\_encoded\_features)
The transformed data.
Returns:
**X\_tr**ndarray of shape (n\_samples, n\_features)
Inverse transformed array.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_encoders.py#L1349)
Transform X to ordinal codes.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data to encode.
Returns:
**X\_out**ndarray of shape (n\_samples, n\_features)
Transformed input.
Examples using `sklearn.preprocessing.OrdinalEncoder`
-----------------------------------------------------
[Categorical Feature Support in Gradient Boosting](../../auto_examples/ensemble/plot_gradient_boosting_categorical#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-categorical-py)
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Time-related feature engineering](../../auto_examples/applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py)
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Permutation Importance vs Random Forest Feature Importance (MDI)](../../auto_examples/inspection/plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py)
scikit_learn sklearn.metrics.pairwise.additive_chi2_kernel sklearn.metrics.pairwise.additive\_chi2\_kernel
===============================================
sklearn.metrics.pairwise.additive\_chi2\_kernel(*X*, *Y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L1390)
Compute the additive chi-squared kernel between observations in X and Y.
The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms.
The chi-squared kernel is given by:
```
k(x, y) = -Sum [(x - y)^2 / (x + y)]
```
It can be interpreted as a weighted difference per entry.
Read more in the [User Guide](../metrics#chi2-kernel).
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features)
A feature array.
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
An optional second feature array. If `None`, uses `Y=X`.
Returns:
**kernel\_matrix**ndarray of shape (n\_samples\_X, n\_samples\_Y)
The kernel matrix.
See also
[`chi2_kernel`](sklearn.metrics.pairwise.chi2_kernel#sklearn.metrics.pairwise.chi2_kernel "sklearn.metrics.pairwise.chi2_kernel")
The exponentiated version of the kernel, which is usually preferable.
[`sklearn.kernel_approximation.AdditiveChi2Sampler`](sklearn.kernel_approximation.additivechi2sampler#sklearn.kernel_approximation.AdditiveChi2Sampler "sklearn.kernel_approximation.AdditiveChi2Sampler")
A Fourier approximation to this kernel.
#### Notes
As the negative of a distance, this kernel is only conditionally positive definite.
#### References
* Zhang, J. and Marszalek, M. and Lazebnik, S. and Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study International Journal of Computer Vision 2007 <https://hal.archives-ouvertes.fr/hal-00171412/document>
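A quick numeric check of the formula above (a sketch with a made-up pair of strictly positive histograms):
```
import numpy as np
from sklearn.metrics.pairwise import additive_chi2_kernel

x = np.array([[1.0, 2.0, 3.0]])
y = np.array([[2.0, 2.0, 1.0]])

print(additive_chi2_kernel(x, y))       # [[-1.33333333]]
print(-np.sum((x - y) ** 2 / (x + y)))  # same value, computed directly
```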
scikit_learn sklearn.model_selection.GridSearchCV sklearn.model\_selection.GridSearchCV
=====================================
*class*sklearn.model\_selection.GridSearchCV(*estimator*, *param\_grid*, *\**, *scoring=None*, *n\_jobs=None*, *refit=True*, *cv=None*, *verbose=0*, *pre\_dispatch='2\*n\_jobs'*, *error\_score=nan*, *return\_train\_score=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L1021)
Exhaustive search over specified parameter values for an estimator.
Important members are fit, predict.
GridSearchCV implements a “fit” and a “score” method. It also implements “score\_samples”, “predict”, “predict\_proba”, “decision\_function”, “transform” and “inverse\_transform” if they are implemented in the estimator used.
The parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid.
Read more in the [User Guide](../grid_search#grid-search).
Parameters:
**estimator**estimator object
This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a `score` function, or `scoring` must be passed.
**param\_grid**dict or list of dictionaries
Dictionary with parameters names (`str`) as keys and lists of parameter settings to try as values, or a list of such dictionaries, in which case the grids spanned by each dictionary in the list are explored. This enables searching over any sequence of parameter settings.
**scoring**str, callable, list, tuple or dict, default=None
Strategy to evaluate the performance of the cross-validated model on the test set.
If `scoring` represents a single score, one can use:
* a single string (see [The scoring parameter: defining model evaluation rules](../model_evaluation#scoring-parameter));
* a callable (see [Defining your scoring strategy from metric functions](../model_evaluation#scoring)) that returns a single value.
If `scoring` represents multiple scores, one can use:
* a list or tuple of unique strings;
* a callable returning a dictionary where the keys are the metric names and the values are the metric scores;
* a dictionary with metric names as keys and callables as values.
See [Specifying multiple metrics for evaluation](../grid_search#multimetric-grid-search) for an example.
**n\_jobs**int, default=None
Number of jobs to run in parallel. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Changed in version v0.20: `n_jobs` default changed from 1 to None
**refit**bool, str, or callable, default=True
Refit an estimator using the best found parameters on the whole dataset.
For multiple metric evaluation, this needs to be a `str` denoting the scorer that would be used to find the best parameters for refitting the estimator at the end.
Where there are considerations other than maximum score in choosing a best estimator, `refit` can be set to a function which returns the selected `best_index_` given `cv_results_`. In that case, the `best_estimator_` and `best_params_` will be set according to the returned `best_index_` while the `best_score_` attribute will not be available.
The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `GridSearchCV` instance.
Also for multiple metric evaluation, the attributes `best_index_`, `best_score_` and `best_params_` will only be available if `refit` is set and all of them will be determined w.r.t this specific scorer.
See `scoring` parameter to know more about multiple metric evaluation.
See [Custom refit strategy of a grid search with cross-validation](../../auto_examples/model_selection/plot_grid_search_digits#sphx-glr-auto-examples-model-selection-plot-grid-search-digits-py) to see how to design a custom selection strategy using a callable via `refit`.
Changed in version 0.20: Support for callable added.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross validation,
* integer, to specify the number of folds in a `(Stratified)KFold`,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**verbose**int
Controls the verbosity: the higher, the more messages.
* >1 : the computation time for each fold and parameter candidate is displayed;
* >2 : the score is also displayed;
* >3 : the fold and candidate parameter indexes are also displayed together with the starting time of the computation.
**pre\_dispatch**int, or str, default=’2\*n\_jobs’
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
* None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
* An int, giving the exact number of total jobs that are spawned
* A str, giving an expression as a function of n\_jobs, as in ‘2\*n\_jobs’
**error\_score**‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.
**return\_train\_score**bool, default=False
If `False`, the `cv_results_` attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.
New in version 0.19.
Changed in version 0.21: Default value was changed from `True` to `False`
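A minimal usage sketch before the attribute reference below (the estimator, grid, and dataset are illustrative):
```
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}

# exhaustively evaluate every parameter combination with 5-fold CV
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
print(search.best_score_)
```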
Attributes:
**cv\_results\_**dict of numpy (masked) ndarrays
A dict with keys as column headers and values as columns, that can be imported into a pandas `DataFrame`.
For instance the below given table
| param\_kernel | param\_gamma | param\_degree | split0\_test\_score | … | rank\_t… |
| --- | --- | --- | --- | --- | --- |
| ‘poly’ | – | 2 | 0.80 | … | 2 |
| ‘poly’ | – | 3 | 0.70 | … | 4 |
| ‘rbf’ | 0.1 | – | 0.80 | … | 3 |
| ‘rbf’ | 0.2 | – | 0.93 | … | 1 |
will be represented by a `cv_results_` dict of:
```
{
'param_kernel': masked_array(data = ['poly', 'poly', 'rbf', 'rbf'],
mask = [False False False False]...)
'param_gamma': masked_array(data = [-- -- 0.1 0.2],
mask = [ True True False False]...),
'param_degree': masked_array(data = [2.0 3.0 -- --],
mask = [False False True True]...),
'split0_test_score' : [0.80, 0.70, 0.80, 0.93],
'split1_test_score' : [0.82, 0.50, 0.70, 0.78],
'mean_test_score' : [0.81, 0.60, 0.75, 0.85],
'std_test_score' : [0.01, 0.10, 0.05, 0.08],
'rank_test_score' : [2, 4, 3, 1],
'split0_train_score' : [0.80, 0.92, 0.70, 0.93],
'split1_train_score' : [0.82, 0.55, 0.70, 0.87],
'mean_train_score' : [0.81, 0.74, 0.70, 0.90],
'std_train_score' : [0.01, 0.19, 0.00, 0.03],
'mean_fit_time' : [0.73, 0.63, 0.43, 0.49],
'std_fit_time' : [0.01, 0.02, 0.01, 0.01],
'mean_score_time' : [0.01, 0.06, 0.04, 0.04],
'std_score_time' : [0.00, 0.00, 0.00, 0.01],
'params' : [{'kernel': 'poly', 'degree': 2}, ...],
}
```
NOTE
The key `'params'` is used to store a list of parameter settings dicts for all the parameter candidates.
The `mean_fit_time`, `std_fit_time`, `mean_score_time` and `std_score_time` are all in seconds.
For multi-metric evaluation, the scores for all the scorers are available in the `cv_results_` dict at the keys ending with that scorer’s name (`'_<scorer_name>'`) instead of `'_score'` shown above. (‘split0\_test\_precision’, ‘mean\_train\_precision’ etc.)
**best\_estimator\_**estimator
Estimator that was chosen by the search, i.e. estimator which gave highest score (or smallest loss if specified) on the left out data. Not available if `refit=False`.
See `refit` parameter for more information on allowed values.
**best\_score\_**float
Mean cross-validated score of the `best_estimator_`.
For multi-metric evaluation, this is present only if `refit` is specified.
This attribute is not available if `refit` is a function.
**best\_params\_**dict
Parameter setting that gave the best results on the hold out data.
For multi-metric evaluation, this is present only if `refit` is specified.
**best\_index\_**int
The index (of the `cv_results_` arrays) which corresponds to the best candidate parameter setting.
The dict at `search.cv_results_['params'][search.best_index_]` gives the parameter setting for the best model, that gives the highest mean score (`search.best_score_`).
For multi-metric evaluation, this is present only if `refit` is specified.
**scorer\_**function or a dict
Scorer function used on the held out data to choose the best parameters for the model.
For multi-metric evaluation, this attribute holds the validated `scoring` dict which maps the scorer key to the scorer callable.
**n\_splits\_**int
The number of cross-validation splits (folds/iterations).
**refit\_time\_**float
Seconds used for refitting the best model on the whole dataset.
This is present only if `refit` is not False.
New in version 0.20.
**multimetric\_**bool
Whether or not the scorers compute several metrics.
[`classes_`](#sklearn.model_selection.GridSearchCV.classes_ "sklearn.model_selection.GridSearchCV.classes_")ndarray of shape (n\_classes,)
Class labels.
[`n_features_in_`](#sklearn.model_selection.GridSearchCV.n_features_in_ "sklearn.model_selection.GridSearchCV.n_features_in_")int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if `best_estimator_` is defined (see the documentation for the `refit` parameter for more details) and that `best_estimator_` exposes `feature_names_in_` when fit.
New in version 1.0.
See also
[`ParameterGrid`](sklearn.model_selection.parametergrid#sklearn.model_selection.ParameterGrid "sklearn.model_selection.ParameterGrid")
Generates all the combinations of a hyperparameter grid.
[`train_test_split`](sklearn.model_selection.train_test_split#sklearn.model_selection.train_test_split "sklearn.model_selection.train_test_split")
Utility function to split the data into a development set usable for fitting a GridSearchCV instance and an evaluation set for its final evaluation.
[`sklearn.metrics.make_scorer`](sklearn.metrics.make_scorer#sklearn.metrics.make_scorer "sklearn.metrics.make_scorer")
Make a scorer from a performance metric or loss function.
#### Notes
The parameters selected are those that maximize the score of the left out data, unless an explicit score is passed in which case it is used instead.
If `n_jobs` was set to a value higher than one, the data is copied for each point in the grid (and not `n_jobs` times). This is done for efficiency reasons if individual jobs take very little time, but may raise errors if the dataset is large and not enough memory is available. A workaround in this case is to set `pre_dispatch`. Then, the memory is copied only `pre_dispatch` many times. A reasonable value for `pre_dispatch` is `2 * n_jobs`.
#### Examples
```
>>> from sklearn import svm, datasets
>>> from sklearn.model_selection import GridSearchCV
>>> iris = datasets.load_iris()
>>> parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
>>> svc = svm.SVC()
>>> clf = GridSearchCV(svc, parameters)
>>> clf.fit(iris.data, iris.target)
GridSearchCV(estimator=SVC(),
param_grid={'C': [1, 10], 'kernel': ('linear', 'rbf')})
>>> sorted(clf.cv_results_.keys())
['mean_fit_time', 'mean_score_time', 'mean_test_score',...
'param_C', 'param_kernel', 'params',...
'rank_test_score', 'split0_test_score',...
'split2_test_score', ...
'std_fit_time', 'std_score_time', 'std_test_score']
```
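Continuing the example above, a small sketch of inspecting `cv_results_` as a pandas `DataFrame` and reading off the best candidate via `best_index_` (this assumes pandas is installed; it is not required by the estimator itself):
```
>>> import pandas as pd
>>> results = pd.DataFrame(clf.cv_results_)
>>> results.loc[clf.best_index_, 'params'] == clf.best_params_
True
```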
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.model_selection.GridSearchCV.decision_function "sklearn.model_selection.GridSearchCV.decision_function")(X) | Call decision\_function on the estimator with the best found parameters. |
| [`fit`](#sklearn.model_selection.GridSearchCV.fit "sklearn.model_selection.GridSearchCV.fit")(X[, y, groups]) | Run fit with all sets of parameters. |
| [`get_params`](#sklearn.model_selection.GridSearchCV.get_params "sklearn.model_selection.GridSearchCV.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.model_selection.GridSearchCV.inverse_transform "sklearn.model_selection.GridSearchCV.inverse_transform")(Xt) | Call inverse\_transform on the estimator with the best found params. |
| [`predict`](#sklearn.model_selection.GridSearchCV.predict "sklearn.model_selection.GridSearchCV.predict")(X) | Call predict on the estimator with the best found parameters. |
| [`predict_log_proba`](#sklearn.model_selection.GridSearchCV.predict_log_proba "sklearn.model_selection.GridSearchCV.predict_log_proba")(X) | Call predict\_log\_proba on the estimator with the best found parameters. |
| [`predict_proba`](#sklearn.model_selection.GridSearchCV.predict_proba "sklearn.model_selection.GridSearchCV.predict_proba")(X) | Call predict\_proba on the estimator with the best found parameters. |
| [`score`](#sklearn.model_selection.GridSearchCV.score "sklearn.model_selection.GridSearchCV.score")(X[, y]) | Return the score on the given data, if the estimator has been refit. |
| [`score_samples`](#sklearn.model_selection.GridSearchCV.score_samples "sklearn.model_selection.GridSearchCV.score_samples")(X) | Call score\_samples on the estimator with the best found parameters. |
| [`set_params`](#sklearn.model_selection.GridSearchCV.set_params "sklearn.model_selection.GridSearchCV.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.model_selection.GridSearchCV.transform "sklearn.model_selection.GridSearchCV.transform")(X) | Call transform on the estimator with the best found parameters. |
*property*classes\_
Class labels.
Only available when `refit=True` and the estimator is a classifier.
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L548)
Call decision\_function on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `decision_function`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_score**ndarray of shape (n\_samples,) or (n\_samples, n\_classes) or (n\_samples, n\_classes \* (n\_classes-1) / 2)
Result of the decision function for `X` based on the estimator with the best found parameters.
fit(*X*, *y=None*, *\**, *groups=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L738)
Run fit with all sets of parameters.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples, n\_output) or (n\_samples,), default=None
Target relative to X for classification or regression; None for unsupervised learning.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) instance (e.g., [`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")).
**\*\*fit\_params**dict of str -> object
Parameters passed to the `fit` method of the estimator.
If a fit parameter is an array-like whose length is equal to `num_samples` then it will be split across CV groups along with `X` and `y`. For example, the [sample\_weight](https://scikit-learn.org/1.1/glossary.html#term-sample_weight) parameter is split because `len(sample_weights) = len(X)`.
Returns:
**self**object
Instance of fitted estimator.
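A short sketch of `fit` with `groups` and an array-like fit parameter; the toy data, the `GroupKFold` splitter and the uniform `sample_weight` are illustrative assumptions. Because `sample_weight` has length `n_samples`, it is split across folds along with `X` and `y`:
```
>>> import numpy as np
>>> from sklearn.model_selection import GridSearchCV, GroupKFold
>>> from sklearn.svm import SVC
>>> X = np.random.RandomState(0).rand(8, 2)
>>> y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
>>> groups = np.array([0, 0, 1, 1, 2, 2, 3, 3])
>>> search = GridSearchCV(SVC(), {'C': [1, 10]}, cv=GroupKFold(n_splits=2))
>>> search = search.fit(X, y, groups=groups, sample_weight=np.ones(len(X)))
>>> search.n_splits_
2
```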
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*Xt*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L593)
Call inverse\_transform on the estimator with the best found params.
Only available if the underlying estimator implements `inverse_transform` and `refit=True`.
Parameters:
**Xt**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Result of the `inverse_transform` function for `Xt` based on the estimator with the best found parameters.
*property*n\_features\_in\_
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
Only available when `refit=True`.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L480)
Call predict on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
The predicted labels or values for `X` based on the estimator with the best found parameters.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L525)
Call predict\_log\_proba on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict_log_proba`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted class log-probabilities for `X` based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L502)
Call predict\_proba on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `predict_proba`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Predicted class probabilities for `X` based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
score(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L413)
Return the score on the given data, if the estimator has been refit.
This uses the score defined by `scoring` where provided, and the `best_estimator_.score` method otherwise.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples, n\_output) or (n\_samples,), default=None
Target relative to X for classification or regression; None for unsupervised learning.
Returns:
**score**float
The score defined by `scoring` if provided, and the `best_estimator_.score` method otherwise.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L457)
Call score\_samples on the estimator with the best found parameters.
Only available if `refit=True` and the underlying estimator supports `score_samples`.
New in version 0.24.
Parameters:
**X**iterable
Data to predict on. Must fulfill input requirements of the underlying estimator.
Returns:
**y\_score**ndarray of shape (n\_samples,)
The scores computed by the `best_estimator_.score_samples` method for `X`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_search.py#L571)
Call transform on the estimator with the best found parameters.
Only available if the underlying estimator supports `transform` and `refit=True`.
Parameters:
**X**indexable, length n\_samples
Must fulfill the input assumptions of the underlying estimator.
Returns:
**Xt**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
`X` transformed in the new space based on the estimator with the best found parameters.
Examples using `sklearn.model_selection.GridSearchCV`
-----------------------------------------------------
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Feature agglomeration vs. univariate selection](../../auto_examples/cluster/plot_feature_agglomeration_vs_univariate_selection#sphx-glr-auto-examples-cluster-plot-feature-agglomeration-vs-univariate-selection-py)
[Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood](../../auto_examples/covariance/plot_covariance_estimation#sphx-glr-auto-examples-covariance-plot-covariance-estimation-py)
[Model selection with Probabilistic PCA and Factor Analysis (FA)](../../auto_examples/decomposition/plot_pca_vs_fa_model_selection#sphx-glr-auto-examples-decomposition-plot-pca-vs-fa-model-selection-py)
[Comparison of kernel ridge regression and SVR](../../auto_examples/miscellaneous/plot_kernel_ridge_regression#sphx-glr-auto-examples-miscellaneous-plot-kernel-ridge-regression-py)
[Displaying Pipelines](../../auto_examples/miscellaneous/plot_pipeline_display#sphx-glr-auto-examples-miscellaneous-plot-pipeline-display-py)
[Balance model complexity and cross-validated score](../../auto_examples/model_selection/plot_grid_search_refit_callable#sphx-glr-auto-examples-model-selection-plot-grid-search-refit-callable-py)
[Comparing randomized search and grid search for hyperparameter estimation](../../auto_examples/model_selection/plot_randomized_search#sphx-glr-auto-examples-model-selection-plot-randomized-search-py)
[Comparison between grid search and successive halving](../../auto_examples/model_selection/plot_successive_halving_heatmap#sphx-glr-auto-examples-model-selection-plot-successive-halving-heatmap-py)
[Custom refit strategy of a grid search with cross-validation](../../auto_examples/model_selection/plot_grid_search_digits#sphx-glr-auto-examples-model-selection-plot-grid-search-digits-py)
[Demonstration of multi-metric evaluation on cross\_val\_score and GridSearchCV](../../auto_examples/model_selection/plot_multi_metric_evaluation#sphx-glr-auto-examples-model-selection-plot-multi-metric-evaluation-py)
[Nested versus non-nested cross-validation](../../auto_examples/model_selection/plot_nested_cross_validation_iris#sphx-glr-auto-examples-model-selection-plot-nested-cross-validation-iris-py)
[Sample pipeline for text feature extraction and evaluation](../../auto_examples/model_selection/grid_search_text_feature_extraction#sphx-glr-auto-examples-model-selection-grid-search-text-feature-extraction-py)
[Statistical comparison of models using grid search](../../auto_examples/model_selection/plot_grid_search_stats#sphx-glr-auto-examples-model-selection-plot-grid-search-stats-py)
[Caching nearest neighbors](../../auto_examples/neighbors/plot_caching_nearest_neighbors#sphx-glr-auto-examples-neighbors-plot-caching-nearest-neighbors-py)
[Kernel Density Estimation](../../auto_examples/neighbors/plot_digits_kde_sampling#sphx-glr-auto-examples-neighbors-plot-digits-kde-sampling-py)
[Column Transformer with Mixed Types](../../auto_examples/compose/plot_column_transformer_mixed_types#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py)
[Concatenating multiple feature extraction methods](../../auto_examples/compose/plot_feature_union#sphx-glr-auto-examples-compose-plot-feature-union-py)
[Pipelining: chaining a PCA and a logistic regression](../../auto_examples/compose/plot_digits_pipe#sphx-glr-auto-examples-compose-plot-digits-pipe-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](../../auto_examples/compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
[Feature discretization](../../auto_examples/preprocessing/plot_discretization_classification#sphx-glr-auto-examples-preprocessing-plot-discretization-classification-py)
[RBF SVM parameters](../../auto_examples/svm/plot_rbf_parameters#sphx-glr-auto-examples-svm-plot-rbf-parameters-py)
[Scaling the regularization parameter for SVCs](../../auto_examples/svm/plot_svm_scale_c#sphx-glr-auto-examples-svm-plot-svm-scale-c-py)
[Cross-validation on diabetes Dataset Exercise](../../auto_examples/exercises/plot_cv_diabetes#sphx-glr-auto-examples-exercises-plot-cv-diabetes-py)
scikit_learn sklearn.gaussian_process.kernels.Exponentiation sklearn.gaussian\_process.kernels.Exponentiation
================================================
*class*sklearn.gaussian\_process.kernels.Exponentiation(*kernel*, *exponent*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L971)
The Exponentiation kernel takes one base kernel and a scalar parameter \(p\) and combines them via
\[k\_{exp}(X, Y) = k(X, Y)^p\]
Note that the `__pow__` magic method is overridden, so `Exponentiation(RBF(), 2)` is equivalent to using the \*\* operator with `RBF() ** 2`.
Read more in the [User Guide](../gaussian_process#gp-kernels).
New in version 0.18.
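A minimal sketch of that equivalence, using an arbitrary toy input and an `RBF` base kernel (both illustrative assumptions):
```
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import RBF, Exponentiation
>>> X = np.array([[0.0], [1.0], [2.0]])
>>> k_explicit = Exponentiation(RBF(length_scale=1.0), exponent=2)
>>> k_operator = RBF(length_scale=1.0) ** 2
>>> np.allclose(k_explicit(X), k_operator(X))
True
```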
Parameters:
**kernel**Kernel
The base kernel
**exponent**float
The exponent for the base kernel
Attributes:
[`bounds`](#sklearn.gaussian_process.kernels.Exponentiation.bounds "sklearn.gaussian_process.kernels.Exponentiation.bounds")
Returns the log-transformed bounds on the theta.
[`hyperparameters`](#sklearn.gaussian_process.kernels.Exponentiation.hyperparameters "sklearn.gaussian_process.kernels.Exponentiation.hyperparameters")
Returns a list of all hyperparameters.
[`n_dims`](#sklearn.gaussian_process.kernels.Exponentiation.n_dims "sklearn.gaussian_process.kernels.Exponentiation.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.Exponentiation.requires_vector_input "sklearn.gaussian_process.kernels.Exponentiation.requires_vector_input")
Returns whether the kernel is defined on discrete structures.
[`theta`](#sklearn.gaussian_process.kernels.Exponentiation.theta "sklearn.gaussian_process.kernels.Exponentiation.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### Examples
```
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import (RationalQuadratic,
... Exponentiation)
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = Exponentiation(RationalQuadratic(), exponent=2)
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=5,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.419...
>>> gpr.predict(X[:1,:], return_std=True)
(array([635.5...]), array([0.559...]))
```
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.Exponentiation.__call__ "sklearn.gaussian_process.kernels.Exponentiation.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.Exponentiation.clone_with_theta "sklearn.gaussian_process.kernels.Exponentiation.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.Exponentiation.diag "sklearn.gaussian_process.kernels.Exponentiation.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.Exponentiation.get_params "sklearn.gaussian_process.kernels.Exponentiation.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.Exponentiation.is_stationary "sklearn.gaussian_process.kernels.Exponentiation.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.Exponentiation.set_params "sklearn.gaussian_process.kernels.Exponentiation.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1094)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features) or list of object
Left argument of the returned kernel k(X, Y)
**Y**array-like of shape (n\_samples\_Y, n\_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when `eval_gradient` is True.
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1130)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features) or list of object
Argument to the kernel.
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X)
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1016)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameters
Returns a list of all hyperparameters.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1152)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Returns whether the kernel is defined on discrete structures.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
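A small sketch of this log-transformed representation, assuming an `RBF` base kernel with `length_scale=2.0` (an illustrative choice); `theta` holds `log(2.0)` and exponentiating it recovers the original value:
```
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import RBF, Exponentiation
>>> kernel = Exponentiation(RBF(length_scale=2.0), exponent=3)
>>> kernel.theta
array([0.6931...])
>>> np.exp(kernel.theta)
array([2.])
```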
scikit_learn sklearn.cluster.KMeans sklearn.cluster.KMeans
======================
*class*sklearn.cluster.KMeans(*n\_clusters=8*, *\**, *init='k-means++'*, *n\_init=10*, *max\_iter=300*, *tol=0.0001*, *verbose=0*, *random\_state=None*, *copy\_x=True*, *algorithm='lloyd'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1126)
K-Means clustering.
Read more in the [User Guide](../clustering#k-means).
Parameters:
**n\_clusters**int, default=8
The number of clusters to form as well as the number of centroids to generate.
**init**{‘k-means++’, ‘random’}, callable or array-like of shape (n\_clusters, n\_features), default=’k-means++’
Method for initialization:
‘k-means++’ : selects initial cluster centroids using sampling based on an empirical probability distribution of the points’ contribution to the overall inertia. This technique speeds up convergence, and is theoretically proven to be \(\mathcal{O}(\log k)\)-optimal. See the description of `n_init` for more details.
‘random’: choose `n_clusters` observations (rows) at random from data for the initial centroids.
If an array is passed, it should be of shape (n\_clusters, n\_features) and gives the initial centers (see the illustrative sketch after this parameter list).
If a callable is passed, it should take arguments X, n\_clusters and a random state and return an initialization.
**n\_init**int, default=10
Number of times the k-means algorithm will be run with different centroid seeds. The final results will be the best output of n\_init consecutive runs in terms of inertia.
**max\_iter**int, default=300
Maximum number of iterations of the k-means algorithm for a single run.
**tol**float, default=1e-4
Relative tolerance with regards to Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence.
**verbose**int, default=0
Verbosity mode.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**copy\_x**bool, default=True
When pre-computing distances it is more numerically accurate to center the data first. If copy\_x is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy\_x is False. If the original data is sparse, but not in CSR format, a copy will be made even if copy\_x is False.
**algorithm**{“lloyd”, “elkan”, “auto”, “full”}, default=”lloyd”
K-means algorithm to use. The classical EM-style algorithm is `"lloyd"`. The `"elkan"` variation can be more efficient on some datasets with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape `(n_samples, n_clusters)`.
`"auto"` and `"full"` are deprecated and they will be removed in Scikit-Learn 1.3. They are both aliases for `"lloyd"`.
Changed in version 0.18: Added Elkan algorithm
Changed in version 1.1: Renamed “full” to “lloyd”, and deprecated “auto” and “full”. Changed “auto” to use “lloyd” instead of “elkan”.
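A minimal sketch of passing explicit initial centers through `init`; the toy data and center values are illustrative assumptions, and `n_init=1` is used because the initialization is fully specified:
```
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]])
>>> init_centers = np.array([[1.0, 2.0], [10.0, 2.0]])
>>> km = KMeans(n_clusters=2, init=init_centers, n_init=1).fit(X)
>>> km.cluster_centers_.tolist()
[[1.0, 2.0], [10.0, 2.0]]
```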
Attributes:
**cluster\_centers\_**ndarray of shape (n\_clusters, n\_features)
Coordinates of cluster centers. If the algorithm stops before fully converging (see `tol` and `max_iter`), these will not be consistent with `labels_`.
**labels\_**ndarray of shape (n\_samples,)
Labels of each point
**inertia\_**float
Sum of squared distances of samples to their closest cluster center, weighted by the sample weights if provided.
**n\_iter\_**int
Number of iterations run.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`MiniBatchKMeans`](sklearn.cluster.minibatchkmeans#sklearn.cluster.MiniBatchKMeans "sklearn.cluster.MiniBatchKMeans")
Alternative online implementation that does incremental updates of the centers positions using mini-batches. For large scale learning (say n\_samples > 10k) MiniBatchKMeans is probably much faster than the default batch implementation.
#### Notes
The k-means problem is solved using either Lloyd’s or Elkan’s algorithm.
The average complexity is given by O(k n T), where n is the number of samples and T is the number of iterations.
The worst case complexity is given by O(n^(k+2/p)) with n = n\_samples, p = n\_features. (D. Arthur and S. Vassilvitskii, ‘How slow is the k-means method?’ SoCG2006)
In practice, the k-means algorithm is very fast (one of the fastest clustering algorithms available), but it falls in local minima. That’s why it can be useful to restart it several times.
If the algorithm stops before fully converging (because of `tol` or `max_iter`), `labels_` and `cluster_centers_` will not be consistent, i.e. the `cluster_centers_` will not be the means of the points in each cluster. Also, the estimator will reassign `labels_` after the last iteration to make `labels_` consistent with `predict` on the training set.
#### Examples
```
>>> from sklearn.cluster import KMeans
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
... [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
>>> kmeans.labels_
array([1, 1, 1, 0, 0, 0], dtype=int32)
>>> kmeans.predict([[0, 0], [12, 3]])
array([1, 0], dtype=int32)
>>> kmeans.cluster_centers_
array([[10., 2.],
[ 1., 2.]])
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cluster.KMeans.fit "sklearn.cluster.KMeans.fit")(X[, y, sample\_weight]) | Compute k-means clustering. |
| [`fit_predict`](#sklearn.cluster.KMeans.fit_predict "sklearn.cluster.KMeans.fit_predict")(X[, y, sample\_weight]) | Compute cluster centers and predict cluster index for each sample. |
| [`fit_transform`](#sklearn.cluster.KMeans.fit_transform "sklearn.cluster.KMeans.fit_transform")(X[, y, sample\_weight]) | Compute clustering and transform X to cluster-distance space. |
| [`get_feature_names_out`](#sklearn.cluster.KMeans.get_feature_names_out "sklearn.cluster.KMeans.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.cluster.KMeans.get_params "sklearn.cluster.KMeans.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.cluster.KMeans.predict "sklearn.cluster.KMeans.predict")(X[, sample\_weight]) | Predict the closest cluster each sample in X belongs to. |
| [`score`](#sklearn.cluster.KMeans.score "sklearn.cluster.KMeans.score")(X[, y, sample\_weight]) | Opposite of the value of X on the K-means objective. |
| [`set_params`](#sklearn.cluster.KMeans.set_params "sklearn.cluster.KMeans.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.cluster.KMeans.transform "sklearn.cluster.KMeans.transform")(X) | Transform X to a cluster-distance space. |
fit(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1341)
Compute k-means clustering.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training instances to cluster. It must be noted that the data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous. If a sparse matrix is passed, a copy will be made if it’s not in CSR format.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
New in version 0.20.
Returns:
**self**object
Fitted estimator.
fit\_predict(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L973)
Compute cluster centers and predict cluster index for each sample.
Convenience method; equivalent to calling fit(X) followed by predict(X).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to transform.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**labels**ndarray of shape (n\_samples,)
Index of the cluster each sample belongs to.
fit\_transform(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1035)
Compute clustering and transform X to cluster-distance space.
Equivalent to fit(X).transform(X), but more efficiently implemented.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to transform.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_clusters)
X transformed in the new space.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.cluster.KMeans.fit "sklearn.cluster.KMeans.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L998)
Predict the closest cluster each sample in X belongs to.
In the vector quantization literature, `cluster_centers_` is called the code book and each value returned by `predict` is the index of the closest code in the code book.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to predict.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**labels**ndarray of shape (n\_samples,)
Index of the cluster each sample belongs to.
score(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1085)
Opposite of the value of X on the K-means objective.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**score**float
Opposite of the value of X on the K-means objective.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1059)
Transform X to a cluster-distance space.
In the new space, each dimension is the distance to the cluster centers. Note that even if X is sparse, the array returned by `transform` will typically be dense.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to transform.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_clusters)
X transformed in the new space.
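A short sketch of `transform` and `score`, reusing the toy data from the example above; the rounded distances assume the same cluster ordering obtained there, so the exact values are illustrative:
```
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
>>> # Each column is the Euclidean distance to one cluster center.
>>> kmeans.transform([[0, 0]]).round(2).tolist()
[[10.2, 2.24]]
>>> # score(X) is the negative of the K-means objective (inertia) on X.
>>> bool(np.isclose(kmeans.score(X), -kmeans.inertia_))
True
```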
Examples using `sklearn.cluster.KMeans`
---------------------------------------
[Release Highlights for scikit-learn 1.1](../../auto_examples/release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Bisecting K-Means and Regular K-Means Performance Comparison](../../auto_examples/cluster/plot_bisect_kmeans#sphx-glr-auto-examples-cluster-plot-bisect-kmeans-py)
[Color Quantization using K-Means](../../auto_examples/cluster/plot_color_quantization#sphx-glr-auto-examples-cluster-plot-color-quantization-py)
[Comparison of the K-Means and MiniBatchKMeans clustering algorithms](../../auto_examples/cluster/plot_mini_batch_kmeans#sphx-glr-auto-examples-cluster-plot-mini-batch-kmeans-py)
[Demonstration of k-means assumptions](../../auto_examples/cluster/plot_kmeans_assumptions#sphx-glr-auto-examples-cluster-plot-kmeans-assumptions-py)
[Empirical evaluation of the impact of k-means initialization](../../auto_examples/cluster/plot_kmeans_stability_low_dim_dense#sphx-glr-auto-examples-cluster-plot-kmeans-stability-low-dim-dense-py)
[K-means Clustering](../../auto_examples/cluster/plot_cluster_iris#sphx-glr-auto-examples-cluster-plot-cluster-iris-py)
[Selecting the number of clusters with silhouette analysis on KMeans clustering](../../auto_examples/cluster/plot_kmeans_silhouette_analysis#sphx-glr-auto-examples-cluster-plot-kmeans-silhouette-analysis-py)
[Vector Quantization Example](../../auto_examples/cluster/plot_face_compress#sphx-glr-auto-examples-cluster-plot-face-compress-py)
[Clustering text documents using k-means](../../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
scikit_learn sklearn.cluster.cluster_optics_xi sklearn.cluster.cluster\_optics\_xi
===================================
sklearn.cluster.cluster\_optics\_xi(*\**, *reachability*, *predecessor*, *ordering*, *min\_samples*, *min\_cluster\_size=None*, *xi=0.05*, *predecessor\_correction=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_optics.py#L661)
Automatically extract clusters according to the Xi-steep method.
Parameters:
**reachability**ndarray of shape (n\_samples,)
Reachability distances calculated by OPTICS (`reachability_`).
**predecessor**ndarray of shape (n\_samples,)
Predecessors calculated by OPTICS.
**ordering**ndarray of shape (n\_samples,)
OPTICS ordered point indices (`ordering_`).
**min\_samples**int > 1 or float between 0 and 1
The same as the min\_samples given to OPTICS. Up and down steep regions can't have more than `min_samples` consecutive non-steep points. Expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2).
**min\_cluster\_size**int > 1 or float between 0 and 1, default=None
Minimum number of samples in an OPTICS cluster, expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2). If `None`, the value of `min_samples` is used instead.
**xi**float between 0 and 1, default=0.05
Determines the minimum steepness on the reachability plot that constitutes a cluster boundary. For example, an upwards point in the reachability plot is defined by the ratio from one point to its successor being at most 1-xi.
**predecessor\_correction**bool, default=True
Correct clusters based on the calculated predecessors.
Returns:
**labels**ndarray of shape (n\_samples,)
The labels assigned to samples. Points which are not included in any cluster are labeled as -1.
**clusters**ndarray of shape (n\_clusters, 2)
The list of clusters in the form of `[start, end]` in each row, with all indices inclusive. The clusters are ordered according to `(end, -start)` (ascending) so that larger clusters encompassing smaller clusters come after such nested smaller clusters. Since `labels` does not reflect the hierarchy, usually `len(clusters) > len(np.unique(labels))`.
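A minimal usage sketch: fit `OPTICS` first, then re-extract clusters from its reachability output. The toy data and the `xi` value are illustrative assumptions, and the resulting assignments depend on both:
```
>>> import numpy as np
>>> from sklearn.cluster import OPTICS, cluster_optics_xi
>>> X = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.2],
...               [8.0, 8.0], [8.2, 7.8], [7.9, 8.1]])
>>> clust = OPTICS(min_samples=2).fit(X)
>>> labels, clusters = cluster_optics_xi(
...     reachability=clust.reachability_,
...     predecessor=clust.predecessor_,
...     ordering=clust.ordering_,
...     min_samples=2,
...     xi=0.05,
... )
>>> labels.shape
(6,)
```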
scikit_learn sklearn.linear_model.SGDClassifier sklearn.linear\_model.SGDClassifier
===================================
*class*sklearn.linear\_model.SGDClassifier(*loss='hinge'*, *\**, *penalty='l2'*, *alpha=0.0001*, *l1\_ratio=0.15*, *fit\_intercept=True*, *max\_iter=1000*, *tol=0.001*, *shuffle=True*, *verbose=0*, *epsilon=0.1*, *n\_jobs=None*, *random\_state=None*, *learning\_rate='optimal'*, *eta0=0.0*, *power\_t=0.5*, *early\_stopping=False*, *validation\_fraction=0.1*, *n\_iter\_no\_change=5*, *class\_weight=None*, *warm\_start=False*, *average=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L903)
Linear classifiers (SVM, logistic regression, etc.) with SGD training.
This estimator implements regularized linear models with stochastic gradient descent (SGD) learning: the gradient of the loss is estimated one sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). SGD allows minibatch (online/out-of-core) learning via the `partial_fit` method. For best results using the default learning rate schedule, the data should have zero mean and unit variance.
This implementation works with data represented as dense or sparse arrays of floating point values for the features. The model it fits can be controlled with the loss parameter; by default, it fits a linear support vector machine (SVM).
The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared euclidean norm L2 or the absolute norm L1 or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection.
Read more in the [User Guide](../sgd#sgd).
Parameters:
**loss**{‘hinge’, ‘log\_loss’, ‘log’, ‘modified\_huber’, ‘squared\_hinge’, ‘perceptron’, ‘squared\_error’, ‘huber’, ‘epsilon\_insensitive’, ‘squared\_epsilon\_insensitive’}, default=’hinge’
The loss function to be used.
* ‘hinge’ gives a linear SVM.
* ‘log\_loss’ gives logistic regression, a probabilistic classifier.
* ‘modified\_huber’ is another smooth loss that brings tolerance to outliers as well as probability estimates.
* ‘squared\_hinge’ is like hinge but is quadratically penalized.
* ‘perceptron’ is the linear loss used by the perceptron algorithm.
* The other losses, ‘squared\_error’, ‘huber’, ‘epsilon\_insensitive’ and ‘squared\_epsilon\_insensitive’ are designed for regression but can be useful in classification as well; see [`SGDRegressor`](sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor") for a description.
More details about the losses formulas can be found in the [User Guide](../sgd#sgd-mathematical-formulation).
Deprecated since version 1.0: The loss ‘squared\_loss’ was deprecated in v1.0 and will be removed in version 1.2. Use `loss='squared_error'` which is equivalent.
Deprecated since version 1.1: The loss ‘log’ was deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
**penalty**{‘l2’, ‘l1’, ‘elasticnet’}, default=’l2’
The penalty (aka regularization term) to be used. Defaults to ‘l2’ which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’.
**alpha**float, default=0.0001
Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when `learning_rate` is set to ‘optimal’. Values must be in the range `[0.0, inf)`.
**l1\_ratio**float, default=0.15
The Elastic Net mixing parameter, with 0 <= l1\_ratio <= 1. l1\_ratio=0 corresponds to L2 penalty, l1\_ratio=1 to L1. Only used if `penalty` is ‘elasticnet’. Values must be in the range `[0.0, 1.0]`.
**fit\_intercept**bool, default=True
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.
**max\_iter**int, default=1000
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the `fit` method, and not the [`partial_fit`](#sklearn.linear_model.SGDClassifier.partial_fit "sklearn.linear_model.SGDClassifier.partial_fit") method. Values must be in the range `[1, inf)`.
New in version 0.19.
**tol**float, default=1e-3
The stopping criterion. If it is not None, training will stop when (loss > best\_loss - tol) for `n_iter_no_change` consecutive epochs. Convergence is checked against the training loss or the validation loss depending on the `early_stopping` parameter. Values must be in the range `[0.0, inf)`.
New in version 0.19.
**shuffle**bool, default=True
Whether or not the training data should be shuffled after each epoch.
**verbose**int, default=0
The verbosity level. Values must be in the range `[0, inf)`.
**epsilon**float, default=0.1
Epsilon in the epsilon-insensitive loss functions; only if `loss` is ‘huber’, ‘epsilon\_insensitive’, or ‘squared\_epsilon\_insensitive’. For ‘huber’, determines the threshold at which it becomes less important to get the prediction exactly right. For epsilon-insensitive, any differences between the current prediction and the correct label are ignored if they are less than this threshold. Values must be in the range `[0.0, inf)`.
**n\_jobs**int, default=None
The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**random\_state**int, RandomState instance, default=None
Used for shuffling the data, when `shuffle` is set to `True`. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state). Integer values must be in the range `[0, 2**32 - 1]`.
**learning\_rate**str, default=’optimal’
The learning rate schedule:
* ‘constant’: `eta = eta0`
* ‘optimal’: `eta = 1.0 / (alpha * (t + t0))` where `t0` is chosen by a heuristic proposed by Leon Bottou.
* ‘invscaling’: `eta = eta0 / pow(t, power_t)`
* ‘adaptive’: `eta = eta0`, as long as the training loss keeps decreasing. Each time n\_iter\_no\_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if `early_stopping` is `True`, the current learning rate is divided by 5.
New in version 0.20: Added ‘adaptive’ option
**eta0**float, default=0.0
The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.0 as eta0 is not used by the default schedule ‘optimal’. Values must be in the range `(0.0, inf)`.
**power\_t**float, default=0.5
The exponent for inverse scaling learning rate [default 0.5]. Values must be in the range `(-inf, inf)`.
**early\_stopping**bool, default=False
Whether to use early stopping to terminate training when validation score is not improving. If set to `True`, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score returned by the `score` method is not improving by at least tol for n\_iter\_no\_change consecutive epochs.
New in version 0.20: Added ‘early\_stopping’ option
**validation\_fraction**float, default=0.1
The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if `early_stopping` is True. Values must be in the range `(0.0, 1.0)`.
New in version 0.20: Added ‘validation\_fraction’ option
**n\_iter\_no\_change**int, default=5
Number of iterations with no improvement to wait before stopping fitting. Convergence is checked against the training loss or the validation loss depending on the `early_stopping` parameter. Integer values must be in the range `[1, max_iter)`.
New in version 0.20: Added ‘n\_iter\_no\_change’ option
**class\_weight**dict, {class\_label: weight} or “balanced”, default=None
Preset for the class\_weight fit parameter.
Weights associated with classes. If not given, all classes are supposed to have weight one.
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))` (see the illustrative sketch after this parameter list).
**warm\_start**bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
Repeatedly calling fit or partial\_fit when warm\_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling `fit` resets this counter, while `partial_fit` will result in increasing the existing counter.
**average**bool or int, default=False
When set to `True`, computes the averaged SGD weights across all updates and stores the result in the `coef_` attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches `average`. So `average=10` will begin averaging after seeing 10 samples. Integer values must be in the range `[1, n_samples]`.
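A small sketch of the “balanced” class-weight heuristic quoted above, on an arbitrary label vector (values rounded for readability):
```
>>> import numpy as np
>>> y = np.array([0, 0, 0, 1])
>>> n_samples, n_classes = y.shape[0], np.unique(y).size
>>> (n_samples / (n_classes * np.bincount(y))).round(2).tolist()
[0.67, 2.0]
```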
Attributes:
**coef\_**ndarray of shape (1, n\_features) if n\_classes == 2 else (n\_classes, n\_features)
Weights assigned to the features.
**intercept\_**ndarray of shape (1,) if n\_classes == 2 else (n\_classes,)
Constants in decision function.
**n\_iter\_**int
The actual number of iterations before reaching the stopping criterion. For multiclass fits, it is the maximum over every binary fit.
**loss\_function\_**concrete `LossFunction`
**classes\_**array of shape (n\_classes,)
**t\_**int
Number of weight updates performed during training. Same as `(n_iter_ * n_samples)`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`sklearn.svm.LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC")
Linear support vector classification.
[`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression")
Logistic regression.
[`Perceptron`](sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron "sklearn.linear_model.Perceptron")
Inherits from SGDClassifier. `Perceptron()` is equivalent to `SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None)`.
#### Examples
```
>>> import numpy as np
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.pipeline import make_pipeline
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> Y = np.array([1, 1, 2, 2])
>>> # Always scale the input. The most convenient way is to use a pipeline.
>>> clf = make_pipeline(StandardScaler(),
... SGDClassifier(max_iter=1000, tol=1e-3))
>>> clf.fit(X, Y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('sgdclassifier', SGDClassifier())])
>>> print(clf.predict([[-0.8, -1]]))
[1]
```
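A further sketch on the same toy data: with `loss='log_loss'` the fitted model is probabilistic, so `predict_proba` becomes available (the pipeline and `random_state` are illustrative choices):
```
>>> import numpy as np
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> Y = np.array([1, 1, 2, 2])
>>> clf = make_pipeline(StandardScaler(),
...                     SGDClassifier(loss='log_loss', random_state=0))
>>> clf = clf.fit(X, Y)
>>> proba = clf.predict_proba([[-0.8, -1]])
>>> proba.shape
(1, 2)
>>> bool(np.isclose(proba.sum(), 1.0))
True
```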
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.linear_model.SGDClassifier.decision_function "sklearn.linear_model.SGDClassifier.decision_function")(X) | Predict confidence scores for samples. |
| [`densify`](#sklearn.linear_model.SGDClassifier.densify "sklearn.linear_model.SGDClassifier.densify")() | Convert coefficient matrix to dense array format. |
| [`fit`](#sklearn.linear_model.SGDClassifier.fit "sklearn.linear_model.SGDClassifier.fit")(X, y[, coef\_init, intercept\_init, ...]) | Fit linear model with Stochastic Gradient Descent. |
| [`get_params`](#sklearn.linear_model.SGDClassifier.get_params "sklearn.linear_model.SGDClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.linear_model.SGDClassifier.partial_fit "sklearn.linear_model.SGDClassifier.partial_fit")(X, y[, classes, sample\_weight]) | Perform one epoch of stochastic gradient descent on given samples. |
| [`predict`](#sklearn.linear_model.SGDClassifier.predict "sklearn.linear_model.SGDClassifier.predict")(X) | Predict class labels for samples in X. |
| [`predict_log_proba`](#sklearn.linear_model.SGDClassifier.predict_log_proba "sklearn.linear_model.SGDClassifier.predict_log_proba")(X) | Log of probability estimates. |
| [`predict_proba`](#sklearn.linear_model.SGDClassifier.predict_proba "sklearn.linear_model.SGDClassifier.predict_proba")(X) | Probability estimates. |
| [`score`](#sklearn.linear_model.SGDClassifier.score "sklearn.linear_model.SGDClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.linear_model.SGDClassifier.set_params "sklearn.linear_model.SGDClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`sparsify`](#sklearn.linear_model.SGDClassifier.sparsify "sklearn.linear_model.SGDClassifier.sparsify")() | Convert coefficient matrix to sparse format. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L408)
Predict confidence scores for samples.
The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the confidence scores.
Returns:
**scores**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Confidence scores per `(n_samples, n_classes)` combination. In the binary case, confidence score for `self.classes_[1]` where >0 means this class would be predicted.
densify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L477)
Convert coefficient matrix to dense array format.
Converts the `coef_` member (back) to a numpy.ndarray. This is the default format of `coef_` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns:
self
Fitted estimator.
fit(*X*, *y*, *coef\_init=None*, *intercept\_init=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L862)
Fit linear model with Stochastic Gradient Descent.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Training data.
**y**ndarray of shape (n\_samples,)
Target values.
**coef\_init**ndarray of shape (n\_classes, n\_features), default=None
The initial coefficients to warm-start the optimization.
**intercept\_init**ndarray of shape (n\_classes,), default=None
The initial intercept to warm-start the optimization.
**sample\_weight**array-like, shape (n\_samples,), default=None
Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class\_weight (passed through the constructor) if class\_weight is specified.
Returns:
**self**object
Returns an instance of self.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
partial\_fit(*X*, *y*, *classes=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L802)
Perform one epoch of stochastic gradient descent on given samples.
Internally, this method uses `max_iter = 1`. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence, early stopping, and learning rate adjustments should be handled by the user.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Subset of the training data.
**y**ndarray of shape (n\_samples,)
Subset of the target values.
**classes**ndarray of shape (n\_classes,), default=None
Classes across all calls to partial\_fit. Can be obtained via `np.unique(y_all)`, where y\_all is the target vector of the entire dataset. This argument is required for the first call to partial\_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in `classes`.
**sample\_weight**array-like, shape (n\_samples,), default=None
Weights applied to individual samples. If not provided, uniform weights are assumed.
Returns:
**self**object
Returns an instance of self.
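A minimal sketch of streaming training with `partial_fit` (toy data; the split into two mini-batches is arbitrary):

```
>>> import numpy as np
>>> from sklearn.linear_model import SGDClassifier
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> clf = SGDClassifier(random_state=0)
>>> # The first call must declare every class; later calls may omit `classes`.
>>> clf = clf.partial_fit(X[:2], y[:2], classes=np.unique(y))
>>> clf = clf.partial_fit(X[2:], y[2:])
>>> clf.predict(X).shape
(4,)
```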
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L433)
Predict class labels for samples in X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the predictions.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
Vector containing the class labels for each sample.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L1310)
Log of probability estimates.
This method is only available for log loss and modified Huber loss.
When loss=”modified\_huber”, probability estimates may be hard zeros and ones, so taking the logarithm is not possible.
See `predict_proba` for details.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Input data for prediction.
Returns:
**T**array-like, shape (n\_samples, n\_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L1227)
Probability estimates.
This method is only available for log loss and modified Huber loss.
Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan.
Binary probability estimates for loss=”modified\_huber” are given by (clip(decision\_function(X), -1, 1) + 1) / 2. For other loss functions it is necessary to perform proper probability calibration by wrapping the classifier with [`CalibratedClassifierCV`](sklearn.calibration.calibratedclassifiercv#sklearn.calibration.CalibratedClassifierCV "sklearn.calibration.CalibratedClassifierCV") instead.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Input data for prediction.
Returns:
ndarray of shape (n\_samples, n\_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
#### References
Zadrozny and Elkan, “Transforming classifier scores into multiclass probability estimates”, SIGKDD’02, <https://dl.acm.org/doi/pdf/10.1145/775047.775151>
The justification for the formula in the loss=”modified\_huber” case is in the appendix B in: <http://jmlr.csail.mit.edu/papers/volume2/zhang02c/zhang02c.pdf>
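As a hedged sketch of the points above: `predict_proba` is available directly for the supported losses, while other losses can be wrapped in `CalibratedClassifierCV` (toy data, arbitrary settings):

```
>>> import numpy as np
>>> from sklearn.calibration import CalibratedClassifierCV
>>> from sklearn.linear_model import SGDClassifier
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> SGDClassifier(loss="modified_huber", random_state=0).fit(X, y).predict_proba([[2, 2]]).shape
(1, 2)
>>> # For e.g. the default hinge loss, calibrate instead of calling predict_proba:
>>> calibrated = CalibratedClassifierCV(SGDClassifier(random_state=0), cv=2).fit(X, y)
>>> calibrated.predict_proba([[2, 2]]).shape
(1, 2)
```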
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
sparsify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L497)
Convert coefficient matrix to sparse format.
Converts the `coef_` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The `intercept_` member is not converted.
Returns:
self
Fitted estimator.
#### Notes
For non-sparse models, i.e. when there are not many zeros in `coef_`, this may actually *increase* memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with `(coef_ == 0).sum()`, must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial\_fit method (if any) will not work until you call densify.
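A minimal sketch of the sparsify/densify round trip on an L1-penalized model (synthetic data, arbitrary hyperparameters):

```
>>> import numpy as np
>>> from scipy import sparse
>>> from sklearn.linear_model import SGDClassifier
>>> X = np.random.RandomState(0).randn(50, 20)
>>> y = (X[:, 0] > 0).astype(int)
>>> clf = SGDClassifier(penalty="l1", alpha=0.1, random_state=0).fit(X, y)
>>> clf = clf.sparsify()        # coef_ becomes a scipy.sparse matrix
>>> sparse.issparse(clf.coef_)
True
>>> clf = clf.densify()         # back to a dense ndarray, required before further fitting
>>> sparse.issparse(clf.coef_)
False
```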
Examples using `sklearn.linear_model.SGDClassifier`
---------------------------------------------------
[Model Complexity Influence](../../auto_examples/applications/plot_model_complexity_influence#sphx-glr-auto-examples-applications-plot-model-complexity-influence-py)
[Out-of-core classification of text documents](../../auto_examples/applications/plot_out_of_core_classification#sphx-glr-auto-examples-applications-plot-out-of-core-classification-py)
[Comparing various online solvers](../../auto_examples/linear_model/plot_sgd_comparison#sphx-glr-auto-examples-linear-model-plot-sgd-comparison-py)
[Early stopping of Stochastic Gradient Descent](../../auto_examples/linear_model/plot_sgd_early_stopping#sphx-glr-auto-examples-linear-model-plot-sgd-early-stopping-py)
[Plot multi-class SGD on the iris dataset](../../auto_examples/linear_model/plot_sgd_iris#sphx-glr-auto-examples-linear-model-plot-sgd-iris-py)
[SGD: Maximum margin separating hyperplane](../../auto_examples/linear_model/plot_sgd_separating_hyperplane#sphx-glr-auto-examples-linear-model-plot-sgd-separating-hyperplane-py)
[SGD: Penalties](../../auto_examples/linear_model/plot_sgd_penalties#sphx-glr-auto-examples-linear-model-plot-sgd-penalties-py)
[SGD: Weighted samples](../../auto_examples/linear_model/plot_sgd_weighted_samples#sphx-glr-auto-examples-linear-model-plot-sgd-weighted-samples-py)
[SGD: convex loss functions](../../auto_examples/linear_model/plot_sgd_loss_functions#sphx-glr-auto-examples-linear-model-plot-sgd-loss-functions-py)
[Explicit feature map approximation for RBF kernels](../../auto_examples/miscellaneous/plot_kernel_approximation#sphx-glr-auto-examples-miscellaneous-plot-kernel-approximation-py)
[Comparing randomized search and grid search for hyperparameter estimation](../../auto_examples/model_selection/plot_randomized_search#sphx-glr-auto-examples-model-selection-plot-randomized-search-py)
[Sample pipeline for text feature extraction and evaluation](../../auto_examples/model_selection/grid_search_text_feature_extraction#sphx-glr-auto-examples-model-selection-grid-search-text-feature-extraction-py)
[Semi-supervised Classification on a Text Dataset](../../auto_examples/semi_supervised/plot_semi_supervised_newsgroups#sphx-glr-auto-examples-semi-supervised-plot-semi-supervised-newsgroups-py)
[Classification of text documents using sparse features](../../auto_examples/text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
scikit_learn sklearn.datasets.load_breast_cancer sklearn.datasets.load\_breast\_cancer
=====================================
sklearn.datasets.load\_breast\_cancer(*\**, *return\_X\_y=False*, *as\_frame=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_base.py#L672)
Load and return the breast cancer wisconsin dataset (classification).
The breast cancer dataset is a classic and very easy binary classification dataset.
| | |
| --- | --- |
| Classes | 2 |
| Samples per class | 212(M),357(B) |
| Samples total | 569 |
| Dimensionality | 30 |
| Features | real, positive |
A copy of the UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is downloaded from: <https://goo.gl/U2Uwz2>
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/toy_dataset.html#breast-cancer-dataset).
Parameters:
**return\_X\_y**bool, default=False
If True, returns `(data, target)` instead of a Bunch object. See below for more information about the `data` and `target` object.
New in version 0.18.
**as\_frame**bool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If `return_X_y` is True, then (`data`, `target`) will be pandas DataFrames or Series as described below.
New in version 0.23.
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (569, 30)
The data matrix. If `as_frame=True`, `data` will be a pandas DataFrame.
target{ndarray, Series} of shape (569,)
The classification target. If `as_frame=True`, `target` will be a pandas Series.
feature\_nameslist
The names of the dataset columns.
target\_nameslist
The names of target classes.
frameDataFrame of shape (569, 31)
Only present when `as_frame=True`. DataFrame with `data` and `target`.
New in version 0.23.
DESCRstr
The full description of the dataset.
filenamestr
The path to the location of the data.
New in version 0.20.
**(data, target)**tuple if `return_X_y` is True
A tuple of two ndarrays by default. The first contains a 2D ndarray of shape (569, 30) with each row representing one sample and each column representing the features. The second ndarray of shape (569,) contains the target samples. If `as_frame=True`, both arrays are pandas objects, i.e. `X` a dataframe and `y` a series.
New in version 0.18.
#### Examples
Let’s say you are interested in the samples 10, 50, and 85, and want to know their class name.
```
>>> from sklearn.datasets import load_breast_cancer
>>> data = load_breast_cancer()
>>> data.target[[10, 50, 85]]
array([0, 1, 0])
>>> list(data.target_names)
['malignant', 'benign']
```
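A further sketch of the `return_X_y` and `as_frame` options (the latter assumes pandas is installed):

```
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape, y.shape
((569, 30), (569,))
>>> frame = load_breast_cancer(as_frame=True).frame
>>> frame.shape
(569, 31)
```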
Examples using `sklearn.datasets.load_breast_cancer`
----------------------------------------------------
[Post pruning decision trees with cost complexity pruning](../../auto_examples/tree/plot_cost_complexity_pruning#sphx-glr-auto-examples-tree-plot-cost-complexity-pruning-py)
[Permutation Importance with Multicollinear or Correlated Features](../../auto_examples/inspection/plot_permutation_importance_multicollinear#sphx-glr-auto-examples-inspection-plot-permutation-importance-multicollinear-py)
[Effect of varying threshold for self-training](../../auto_examples/semi_supervised/plot_self_training_varying_threshold#sphx-glr-auto-examples-semi-supervised-plot-self-training-varying-threshold-py)
scikit_learn sklearn.metrics.pairwise.paired_euclidean_distances sklearn.metrics.pairwise.paired\_euclidean\_distances
=====================================================
sklearn.metrics.pairwise.paired\_euclidean\_distances(*X*, *Y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L996)
Compute the paired euclidean distances between X and Y.
Read more in the [User Guide](../metrics#metrics).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input array/matrix X.
**Y**array-like of shape (n\_samples, n\_features)
Input array/matrix Y.
Returns:
**distances**ndarray of shape (n\_samples,)
Output array/matrix containing the calculated paired euclidean distances.
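#### Examples
A minimal usage sketch (toy arrays): the i-th output is the distance between the i-th rows of X and Y.
```
>>> from sklearn.metrics.pairwise import paired_euclidean_distances
>>> X = [[0, 0], [1, 1]]
>>> Y = [[3, 4], [1, 1]]
>>> paired_euclidean_distances(X, Y)
array([5., 0.])
```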
scikit_learn sklearn.svm.NuSVC sklearn.svm.NuSVC
=================
*class*sklearn.svm.NuSVC(*\**, *nu=0.5*, *kernel='rbf'*, *degree=3*, *gamma='scale'*, *coef0=0.0*, *shrinking=True*, *probability=False*, *tol=0.001*, *cache\_size=200*, *class\_weight=None*, *verbose=False*, *max\_iter=-1*, *decision\_function\_shape='ovr'*, *break\_ties=False*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L798)
Nu-Support Vector Classification.
Similar to SVC but uses a parameter to control the number of support vectors.
The implementation is based on libsvm.
Read more in the [User Guide](../svm#svm-classification).
Parameters:
**nu**float, default=0.5
An upper bound on the fraction of margin errors (see [User Guide](../svm#nu-svc)) and a lower bound of the fraction of support vectors. Should be in the interval (0, 1].
**kernel**{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’} or callable, default=’rbf’
Specifies the kernel type to be used in the algorithm. If none is given, ‘rbf’ will be used. If a callable is given it is used to precompute the kernel matrix.
**degree**int, default=3
Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.
**gamma**{‘scale’, ‘auto’} or float, default=’scale’
Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.
* if `gamma='scale'` (default) is passed then it uses 1 / (n\_features \* X.var()) as value of gamma,
* if ‘auto’, uses 1 / n\_features.
Changed in version 0.22: The default value of `gamma` changed from ‘auto’ to ‘scale’.
**coef0**float, default=0.0
Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’.
**shrinking**bool, default=True
Whether to use the shrinking heuristic. See the [User Guide](../svm#shrinking-svm).
**probability**bool, default=False
Whether to enable probability estimates. This must be enabled prior to calling `fit`, will slow down that method as it internally uses 5-fold cross-validation, and `predict_proba` may be inconsistent with `predict`. Read more in the [User Guide](../svm#scores-probabilities).
**tol**float, default=1e-3
Tolerance for stopping criterion.
**cache\_size**float, default=200
Specify the size of the kernel cache (in MB).
**class\_weight**{dict, ‘balanced’}, default=None
Set the parameter C of class i to class\_weight[i]\*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies as `n_samples / (n_classes * np.bincount(y))`.
**verbose**bool, default=False
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.
**max\_iter**int, default=-1
Hard limit on iterations within solver, or -1 for no limit.
**decision\_function\_shape**{‘ovo’, ‘ovr’}, default=’ovr’
Whether to return a one-vs-rest (‘ovr’) decision function of shape (n\_samples, n\_classes) as all other classifiers, or the original one-vs-one (‘ovo’) decision function of libsvm which has shape (n\_samples, n\_classes \* (n\_classes - 1) / 2). However, one-vs-one (‘ovo’) is always used as multi-class strategy. The parameter is ignored for binary classification.
Changed in version 0.19: decision\_function\_shape is ‘ovr’ by default.
New in version 0.17: *decision\_function\_shape=’ovr’* is recommended.
Changed in version 0.17: Deprecated *decision\_function\_shape=’ovo’ and None*.
**break\_ties**bool, default=False
If true, `decision_function_shape='ovr'`, and number of classes > 2, [predict](https://scikit-learn.org/1.1/glossary.html#term-predict) will break ties according to the confidence values of [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function); otherwise the first class among the tied classes is returned. Please note that breaking ties comes at a relatively high computational cost compared to a simple predict.
New in version 0.22.
**random\_state**int, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when `probability` is False. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**class\_weight\_**ndarray of shape (n\_classes,)
Multipliers of parameter C of each class. Computed based on the `class_weight` parameter.
**classes\_**ndarray of shape (n\_classes,)
The unique classes labels.
[`coef_`](#sklearn.svm.NuSVC.coef_ "sklearn.svm.NuSVC.coef_")ndarray of shape (n\_classes \* (n\_classes -1) / 2, n\_features)
Weights assigned to the features when `kernel="linear"`.
**dual\_coef\_**ndarray of shape (n\_classes - 1, n\_SV)
Dual coefficients of the support vector in the decision function (see [Mathematical formulation](../svm#svm-mathematical-formulation)), multiplied by their targets. For multiclass, coefficient for all 1-vs-1 classifiers. The layout of the coefficients in the multiclass case is somewhat non-trivial. See the [multi-class section of the User Guide](../svm#svm-multi-class) for details.
**fit\_status\_**int
0 if correctly fitted, 1 if the algorithm did not converge.
**intercept\_**ndarray of shape (n\_classes \* (n\_classes - 1) / 2,)
Constants in decision function.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_iter\_**ndarray of shape (n\_classes \* (n\_classes - 1) // 2,)
Number of iterations run by the optimization routine to fit the model. The shape of this attribute depends on the number of models optimized which in turn depends on the number of classes.
New in version 1.1.
**support\_**ndarray of shape (n\_SV,)
Indices of support vectors.
**support\_vectors\_**ndarray of shape (n\_SV, n\_features)
Support vectors.
[`n_support_`](#sklearn.svm.NuSVC.n_support_ "sklearn.svm.NuSVC.n_support_")ndarray of shape (n\_classes,), dtype=int32
Number of support vectors for each class.
[`probA_`](#sklearn.svm.NuSVC.probA_ "sklearn.svm.NuSVC.probA_")ndarray of shape (n\_classes \* (n\_classes - 1) / 2,)
Parameter learned in Platt scaling when `probability=True`.
[`probB_`](#sklearn.svm.NuSVC.probB_ "sklearn.svm.NuSVC.probB_")ndarray of shape (n\_classes \* (n\_classes - 1) / 2,)
Parameter learned in Platt scaling when `probability=True`.
**shape\_fit\_**tuple of int of shape (n\_dimensions\_of\_X,)
Array dimensions of training vector `X`.
See also
[`SVC`](sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC")
Support Vector Machine for classification using libsvm.
[`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC")
Scalable linear Support Vector Machine for classification using liblinear.
#### References
[1] [LIBSVM: A Library for Support Vector Machines](http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf)
[2] [Platt, John (1999). “Probabilistic outputs for support vector machines and comparison to regularized likelihood methods.”](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639)
#### Examples
```
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import NuSVC
>>> clf = make_pipeline(StandardScaler(), NuSVC())
>>> clf.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()), ('nusvc', NuSVC())])
>>> print(clf.predict([[-0.8, -1]]))
[1]
```
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.svm.NuSVC.decision_function "sklearn.svm.NuSVC.decision_function")(X) | Evaluate the decision function for the samples in X. |
| [`fit`](#sklearn.svm.NuSVC.fit "sklearn.svm.NuSVC.fit")(X, y[, sample\_weight]) | Fit the SVM model according to the given training data. |
| [`get_params`](#sklearn.svm.NuSVC.get_params "sklearn.svm.NuSVC.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.svm.NuSVC.predict "sklearn.svm.NuSVC.predict")(X) | Perform classification on samples in X. |
| [`predict_log_proba`](#sklearn.svm.NuSVC.predict_log_proba "sklearn.svm.NuSVC.predict_log_proba")(X) | Compute log probabilities of possible outcomes for samples in X. |
| [`predict_proba`](#sklearn.svm.NuSVC.predict_proba "sklearn.svm.NuSVC.predict_proba")(X) | Compute probabilities of possible outcomes for samples in X. |
| [`score`](#sklearn.svm.NuSVC.score "sklearn.svm.NuSVC.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.svm.NuSVC.set_params "sklearn.svm.NuSVC.set_params")(\*\*params) | Set the parameters of this estimator. |
*property*coef\_
Weights assigned to the features when `kernel="linear"`.
Returns:
ndarray of shape (n\_features, n\_classes)
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_base.py#L748)
Evaluate the decision function for the samples in X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**X**ndarray of shape (n\_samples, n\_classes \* (n\_classes-1) / 2)
Returns the decision function of the sample for each class in the model. If decision\_function\_shape=’ovr’, the shape is (n\_samples, n\_classes).
#### Notes
If decision\_function\_shape=’ovo’, the function values are proportional to the distance of the samples X to the separating hyperplane. If the exact distances are required, divide the function values by the norm of the weight vector (`coef_`). See also [this question](https://stats.stackexchange.com/questions/14876/interpreting-distance-from-hyperplane-in-svm) for further details. If decision\_function\_shape=’ovr’, the decision function is a monotonic transformation of ovo decision function.
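A minimal sketch contrasting the two `decision_function_shape` settings on a toy four-class problem (the data is arbitrary):

```
>>> import numpy as np
>>> from sklearn.svm import NuSVC
>>> X = np.array([[-2, -2], [-3, -2], [2, 2], [3, 2], [-2, 2], [-3, 2], [2, -2], [3, -2]])
>>> y = np.array([0, 0, 1, 1, 2, 2, 3, 3])
>>> NuSVC(decision_function_shape="ovo").fit(X, y).decision_function([[0, 0]]).shape
(1, 6)
>>> NuSVC(decision_function_shape="ovr").fit(X, y).decision_function([[0, 0]]).shape
(1, 4)
```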
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_base.py#L122)
Fit the SVM model according to the given training data.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features. For kernel=”precomputed”, the expected shape of X is (n\_samples, n\_samples).
**y**array-like of shape (n\_samples,)
Target values (class labels in classification, real numbers in regression).
**sample\_weight**array-like of shape (n\_samples,), default=None
Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points.
Returns:
**self**object
Fitted estimator.
#### Notes
If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr\_matrix, X and/or y may be copied.
If X is a dense array, then the other methods will not support sparse matrices as input.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_support\_
Number of support vectors for each class.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_base.py#L780)
Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples\_test, n\_samples\_train)
For kernel=”precomputed”, the expected shape of X is (n\_samples\_test, n\_samples\_train).
Returns:
**y\_pred**ndarray of shape (n\_samples,)
Class labels for samples in X.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_base.py#L863)
Compute log probabilities of possible outcomes for samples in X.
The model needs to have probability information computed at training time: fit with attribute `probability` set to True.
Parameters:
**X**array-like of shape (n\_samples, n\_features) or (n\_samples\_test, n\_samples\_train)
For kernel=”precomputed”, the expected shape of X is (n\_samples\_test, n\_samples\_train).
Returns:
**T**ndarray of shape (n\_samples, n\_classes)
Returns the log-probabilities of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
#### Notes
The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_base.py#L826)
Compute probabilities of possible outcomes for samples in X.
The model needs to have probability information computed at training time: fit with attribute `probability` set to True.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
For kernel=”precomputed”, the expected shape of X is (n\_samples\_test, n\_samples\_train).
Returns:
**T**ndarray of shape (n\_samples, n\_classes)
Returns the probability of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
#### Notes
The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets.
*property*probA\_
Parameter learned in Platt scaling when `probability=True`.
Returns:
ndarray of shape (n\_classes \* (n\_classes - 1) / 2)
*property*probB\_
Parameter learned in Platt scaling when `probability=True`.
Returns:
ndarray of shape (n\_classes \* (n\_classes - 1) / 2)
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.svm.NuSVC`
----------------------------------
[Non-linear SVM](../../auto_examples/svm/plot_svm_nonlinear#sphx-glr-auto-examples-svm-plot-svm-nonlinear-py)
scikit_learn sklearn.feature_selection.RFECV sklearn.feature\_selection.RFECV
================================
*class*sklearn.feature\_selection.RFECV(*estimator*, *\**, *step=1*, *min\_features\_to\_select=1*, *cv=None*, *scoring=None*, *verbose=0*, *n\_jobs=None*, *importance\_getter='auto'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L450)
Recursive feature elimination with cross-validation to select features.
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
Read more in the [User Guide](../feature_selection#rfe).
Parameters:
**estimator**`Estimator` instance
A supervised learning estimator with a `fit` method that provides information about feature importance either through a `coef_` attribute or through a `feature_importances_` attribute.
**step**int or float, default=1
If greater than or equal to 1, then `step` corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then `step` corresponds to the percentage (rounded down) of features to remove at each iteration. Note that the last iteration may remove fewer than `step` features in order to reach `min_features_to_select`.
**min\_features\_to\_select**int, default=1
The minimum number of features to be selected. This number of features will always be scored, even if the difference between the original feature count and `min_features_to_select` isn’t divisible by `step`.
New in version 0.20.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross-validation,
* integer, to specify the number of folds.
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value of None changed from 3-fold to 5-fold.
**scoring**str, callable or None, default=None
A string (see model evaluation documentation) or a scorer callable object / function with signature `scorer(estimator, X, y)`.
**verbose**int, default=0
Controls verbosity of output.
**n\_jobs**int or None, default=None
Number of cores to run in parallel while fitting across folds. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
New in version 0.18.
**importance\_getter**str or callable, default=’auto’
If ‘auto’, uses the feature importance either through a `coef_` or `feature_importances_` attributes of estimator.
Also accepts a string that specifies an attribute name/path for extracting feature importance. For example, give `regressor_.coef_` in case of [`TransformedTargetRegressor`](sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor "sklearn.compose.TransformedTargetRegressor") or `named_steps.clf.feature_importances_` in case of [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline") with its last step named `clf`.
If `callable`, overrides the default feature importance getter. The callable is passed with the fitted estimator and it should return importance for each feature.
New in version 0.24.
Attributes:
[`classes_`](#sklearn.feature_selection.RFECV.classes_ "sklearn.feature_selection.RFECV.classes_")ndarray of shape (n\_classes,)
Classes labels available when `estimator` is a classifier.
**estimator\_**`Estimator` instance
The fitted estimator used to select features.
[`grid_scores_`](#sklearn.feature_selection.RFECV.grid_scores_ "sklearn.feature_selection.RFECV.grid_scores_")ndarray of shape (n\_subsets\_of\_features,)
DEPRECATED: The `grid_scores_` attribute is deprecated in version 1.0 in favor of `cv_results_` and will be removed in version 1.2.
**cv\_results\_**dict of ndarrays
A dict with keys:
split(k)\_test\_scorendarray of shape (n\_subsets\_of\_features,)
The cross-validation scores across (k)th fold.
mean\_test\_scorendarray of shape (n\_subsets\_of\_features,)
Mean of scores over the folds.
std\_test\_scorendarray of shape (n\_subsets\_of\_features,)
Standard deviation of scores over the folds.
New in version 1.0.
**n\_features\_**int
The number of selected features with cross-validation.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying estimator exposes such an attribute when fit.
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**ranking\_**ndarray of shape (n\_features,)
The feature ranking, such that `ranking_[i]` corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1.
**support\_**ndarray of shape (n\_features,)
The mask of selected features.
See also
[`RFE`](sklearn.feature_selection.rfe#sklearn.feature_selection.RFE "sklearn.feature_selection.RFE")
Recursive feature elimination.
#### Notes
The size of `grid_scores_` is equal to `ceil((n_features - min_features_to_select) / step) + 1`, where step is the number of features removed at each iteration.
Allows NaN/Inf in the input if the underlying estimator does as well.
#### References
[1] Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002.
#### Examples
The following example shows how to recover the 5 informative features in the Friedman #1 dataset, which are not known a priori.
```
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.feature_selection import RFECV
>>> from sklearn.svm import SVR
>>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
>>> estimator = SVR(kernel="linear")
>>> selector = RFECV(estimator, step=1, cv=5)
>>> selector = selector.fit(X, y)
>>> selector.support_
array([ True, True, True, True, True, False, False, False, False,
False])
>>> selector.ranking_
array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5])
```
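Continuing the example above, the cross-validated scores described under `cv_results_` can be inspected directly (a brief sketch; only the shapes are asserted here):

```
>>> selector.n_features_
5
>>> selector.cv_results_["mean_test_score"].shape
(10,)
```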
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.feature_selection.RFECV.decision_function "sklearn.feature_selection.RFECV.decision_function")(X) | Compute the decision function of `X`. |
| [`fit`](#sklearn.feature_selection.RFECV.fit "sklearn.feature_selection.RFECV.fit")(X, y[, groups]) | Fit the RFE model and automatically tune the number of selected features. |
| [`fit_transform`](#sklearn.feature_selection.RFECV.fit_transform "sklearn.feature_selection.RFECV.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.feature_selection.RFECV.get_feature_names_out "sklearn.feature_selection.RFECV.get_feature_names_out")([input\_features]) | Mask feature names according to selected features. |
| [`get_params`](#sklearn.feature_selection.RFECV.get_params "sklearn.feature_selection.RFECV.get_params")([deep]) | Get parameters for this estimator. |
| [`get_support`](#sklearn.feature_selection.RFECV.get_support "sklearn.feature_selection.RFECV.get_support")([indices]) | Get a mask, or integer index, of the features selected. |
| [`inverse_transform`](#sklearn.feature_selection.RFECV.inverse_transform "sklearn.feature_selection.RFECV.inverse_transform")(X) | Reverse the transformation operation. |
| [`predict`](#sklearn.feature_selection.RFECV.predict "sklearn.feature_selection.RFECV.predict")(X) | Reduce X to the selected features and predict using the estimator. |
| [`predict_log_proba`](#sklearn.feature_selection.RFECV.predict_log_proba "sklearn.feature_selection.RFECV.predict_log_proba")(X) | Predict class log-probabilities for X. |
| [`predict_proba`](#sklearn.feature_selection.RFECV.predict_proba "sklearn.feature_selection.RFECV.predict_proba")(X) | Predict class probabilities for X. |
| [`score`](#sklearn.feature_selection.RFECV.score "sklearn.feature_selection.RFECV.score")(X, y, \*\*fit\_params) | Reduce X to the selected features and return the score of the estimator. |
| [`set_params`](#sklearn.feature_selection.RFECV.set_params "sklearn.feature_selection.RFECV.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_selection.RFECV.transform "sklearn.feature_selection.RFECV.transform")(X) | Reduce X to the selected features. |
*property*classes\_
Classes labels available when `estimator` is a classifier.
Returns:
ndarray of shape (n\_classes,)
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L382)
Compute the decision function of `X`.
Parameters:
**X**{array-like or sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
Returns:
**score**array, shape = [n\_samples, n\_classes] or [n\_samples]
The decision function of the input samples. The order of the classes corresponds to that in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_). Regression and binary classification produce an array of shape [n\_samples].
fit(*X*, *y*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L648)
Fit the RFE model and automatically tune the number of selected features.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the total number of features.
**y**array-like of shape (n\_samples,)
Target values (integers for classification, real numbers for regression).
**groups**array-like of shape (n\_samples,) or None, default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) instance (e.g., [`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")).
New in version 0.20.
Returns:
**self**object
Fitted estimator.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L146)
Mask feature names according to selected features.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_support(*indices=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L33)
Get a mask, or integer index, of the features selected.
Parameters:
**indices**bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
Returns:
**support**array
An index that selects the retained features from a feature vector. If `indices` is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If `indices` is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
*property*grid\_scores\_
DEPRECATED: The `grid_scores_` attribute is deprecated in version 1.0 in favor of `cv_results_` and will be removed in version 1.2.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L106)
Reverse the transformation operation.
Parameters:
**X**array of shape [n\_samples, n\_selected\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_original\_features]
`X` with columns of zeros inserted where features would have been removed by [`transform`](#sklearn.feature_selection.RFECV.transform "sklearn.feature_selection.RFECV.transform").
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L334)
Reduce X to the selected features and predict using the estimator.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**y**array of shape [n\_samples]
The predicted target values.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L424)
Predict class log-probabilities for X.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**p**array of shape (n\_samples, n\_classes)
The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L404)
Predict class probabilities for X.
Parameters:
**X**{array-like or sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
Returns:
**p**array of shape (n\_samples, n\_classes)
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
score(*X*, *y*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_rfe.py#L351)
Reduce X to the selected features and return the score of the estimator.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
**y**array of shape [n\_samples]
The target values.
**\*\*fit\_params**dict
Parameters to pass to the `score` method of the underlying estimator.
New in version 1.0.
Returns:
**score**float
Score of the underlying base estimator computed with the selected features returned by `rfe.transform(X)` and `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L68)
Reduce X to the selected features.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_selected\_features]
The input samples with only the selected features.
Examples using `sklearn.feature_selection.RFECV`
------------------------------------------------
[Recursive feature elimination with cross-validation](../../auto_examples/feature_selection/plot_rfe_with_cross_validation#sphx-glr-auto-examples-feature-selection-plot-rfe-with-cross-validation-py)
scikit_learn sklearn.cluster.k_means sklearn.cluster.k\_means
========================
sklearn.cluster.k\_means(*X*, *n\_clusters*, *\**, *sample\_weight=None*, *init='k-means++'*, *n\_init=10*, *max\_iter=300*, *verbose=False*, *tol=0.0001*, *random\_state=None*, *copy\_x=True*, *algorithm='lloyd'*, *return\_n\_iter=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L263)
Perform K-means clustering algorithm.
Read more in the [User Guide](../clustering#k-means).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The observations to cluster. It must be noted that the data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous.
**n\_clusters**int
The number of clusters to form as well as the number of centroids to generate.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in `X`. If `None`, all observations are assigned equal weight.
**init**{‘k-means++’, ‘random’}, callable or array-like of shape (n\_clusters, n\_features), default=’k-means++’
Method for initialization:
* `'k-means++'` : selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See section Notes in k\_init for more details.
* `'random'`: choose `n_clusters` observations (rows) at random from data for the initial centroids.
* If an array is passed, it should be of shape `(n_clusters, n_features)` and gives the initial centers.
* If a callable is passed, it should take arguments `X`, `n_clusters` and a random state and return an initialization.
**n\_init**int, default=10
Number of time the k-means algorithm will be run with different centroid seeds. The final results will be the best output of `n_init` consecutive runs in terms of inertia.
**max\_iter**int, default=300
Maximum number of iterations of the k-means algorithm to run.
**verbose**bool, default=False
Verbosity mode.
**tol**float, default=1e-4
Relative tolerance with regards to Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**copy\_x**bool, default=True
When pre-computing distances it is more numerically accurate to center the data first. If `copy_x` is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if `copy_x` is False. If the original data is sparse, but not in CSR format, a copy will be made even if `copy_x` is False.
**algorithm**{“lloyd”, “elkan”, “auto”, “full”}, default=”lloyd”
K-means algorithm to use. The classical EM-style algorithm is `"lloyd"`. The `"elkan"` variation can be more efficient on some datasets with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape `(n_samples, n_clusters)`.
`"auto"` and `"full"` are deprecated and they will be removed in Scikit-Learn 1.3. They are both aliases for `"lloyd"`.
Changed in version 0.18: Added Elkan algorithm
Changed in version 1.1: Renamed “full” to “lloyd”, and deprecated “auto” and “full”. Changed “auto” to use “lloyd” instead of “elkan”.
**return\_n\_iter**bool, default=False
Whether or not to return the number of iterations.
Returns:
**centroid**ndarray of shape (n\_clusters, n\_features)
Centroids found at the last iteration of k-means.
**label**ndarray of shape (n\_samples,)
The `label[i]` is the code or index of the centroid the i’th observation is closest to.
**inertia**float
The final value of the inertia criterion (sum of squared distances to the closest centroid for all observations in the training set).
**best\_n\_iter**int
Number of iterations corresponding to the best results. Returned only if `return_n_iter` is set to True.
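#### Examples
A minimal usage sketch (toy data with two obvious clusters; exact cluster indices depend on the initialization, so only shapes and the final inertia are shown):
```
>>> import numpy as np
>>> from sklearn.cluster import k_means
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> centroid, label, inertia = k_means(X, n_clusters=2, random_state=0)
>>> centroid.shape, label.shape
((2, 2), (6,))
>>> inertia
16.0
```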
| programming_docs |
scikit_learn sklearn.feature_extraction.text.HashingVectorizer sklearn.feature\_extraction.text.HashingVectorizer
==================================================
*class*sklearn.feature\_extraction.text.HashingVectorizer(*\**, *input='content'*, *encoding='utf-8'*, *decode\_error='strict'*, *strip\_accents=None*, *lowercase=True*, *preprocessor=None*, *tokenizer=None*, *stop\_words=None*, *token\_pattern='(?u)\\b\\w\\w+\\b'*, *ngram\_range=(1*, *1)*, *analyzer='word'*, *n\_features=1048576*, *binary=False*, *norm='l2'*, *alternate\_sign=True*, *dtype=<class 'numpy.float64'>*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L565)
Convert a collection of text documents to a matrix of token occurrences.
It turns a collection of text documents into a scipy.sparse matrix holding token occurrence counts (or binary occurrence information), possibly normalized as token frequencies if norm=’l1’ or projected on the euclidean unit sphere if norm=’l2’.
This text vectorizer implementation uses the hashing trick to find the token string name to feature integer index mapping.
This strategy has several advantages:
* it requires very little memory and is scalable to large datasets, as there is no need to store a vocabulary dictionary in memory.
* it is fast to pickle and un-pickle as it holds no state besides the constructor parameters.
* it can be used in a streaming (partial fit) or parallel pipeline as there is no state computed during fit.
There are also a couple of cons (vs using a CountVectorizer with an in-memory vocabulary):
* there is no way to compute the inverse transform (from feature indices to string feature names) which can be a problem when trying to introspect which features are most important to a model.
* there can be collisions: distinct tokens can be mapped to the same feature index. However in practice this is rarely an issue if n\_features is large enough (e.g. 2 \*\* 18 for text classification problems).
* no IDF weighting as this would render the transformer stateful.
The hash function employed is the signed 32-bit version of Murmurhash3.
Read more in the [User Guide](../feature_extraction#text-feature-extraction).
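For orientation, a minimal sketch hashing a tiny corpus into a 16-dimensional space (`n_features` is deliberately small here for readability; real applications use much larger values, as discussed below):

```
>>> from sklearn.feature_extraction.text import HashingVectorizer
>>> corpus = [
...     'This is the first document.',
...     'This document is the second document.',
...     'And this is the third one.',
...     'Is this the first document?',
... ]
>>> vectorizer = HashingVectorizer(n_features=2**4)
>>> X = vectorizer.fit_transform(corpus)
>>> X.shape
(4, 16)
```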
Parameters:
**input**{‘filename’, ‘file’, ‘content’}, default=’content’
* If `'filename'`, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze.
* If `'file'`, the sequence items must have a ‘read’ method (file-like object) that is called to fetch the bytes in memory.
* If `'content'`, the input is expected to be a sequence of items that can be of type string or byte.
**encoding**str, default=’utf-8’
If bytes or files are given to analyze, this encoding is used to decode.
**decode\_error**{‘strict’, ‘ignore’, ‘replace’}, default=’strict’
Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given `encoding`. By default, it is ‘strict’, meaning that a UnicodeDecodeError will be raised. Other values are ‘ignore’ and ‘replace’.
**strip\_accents**{‘ascii’, ‘unicode’}, default=None
Remove accents and perform other character normalization during the preprocessing step. ‘ascii’ is a fast method that only works on characters that have a direct ASCII mapping. ‘unicode’ is a slightly slower method that works on any characters. None (default) does nothing.
Both ‘ascii’ and ‘unicode’ use NFKD normalization from [`unicodedata.normalize`](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize "(in Python v3.10)").
**lowercase**bool, default=True
Convert all characters to lowercase before tokenizing.
**preprocessor**callable, default=None
Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps. Only applies if `analyzer` is not callable.
**tokenizer**callable, default=None
Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if `analyzer == 'word'`.
**stop\_words**{‘english’}, list, default=None
If ‘english’, a built-in stop word list for English is used. There are several known issues with ‘english’ and you should consider an alternative (see [Using stop words](../feature_extraction#stop-words)).
If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if `analyzer == 'word'`.
**token\_pattern**str, default=r”(?u)\b\w\w+\b”
Regular expression denoting what constitutes a “token”, only used if `analyzer == 'word'`. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator).
If there is a capturing group in token\_pattern then the captured group content, not the entire match, becomes the token. At most one capturing group is permitted.
**ngram\_range**tuple (min\_n, max\_n), default=(1, 1)
The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min\_n <= n <= max\_n will be used. For example an `ngram_range` of `(1, 1)` means only unigrams, `(1, 2)` means unigrams and bigrams, and `(2, 2)` means only bigrams. Only applies if `analyzer` is not callable.
**analyzer**{‘word’, ‘char’, ‘char\_wb’} or callable, default=’word’
Whether the feature should be made of word or character n-grams. Option ‘char\_wb’ creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space.
If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input.
Changed in version 0.21: Since v0.21, if `input` is `'filename'` or `'file'`, the data is first read from the file and then passed to the given callable analyzer.
**n\_features**int, default=(2 \*\* 20)
The number of features (columns) in the output matrices. Small numbers of features are likely to cause hash collisions, but large numbers will cause larger coefficient dimensions in linear learners.
**binary**bool, default=False
If True, all non zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.
**norm**{‘l1’, ‘l2’}, default=’l2’
Norm used to normalize term vectors. None for no normalization.
**alternate\_sign**bool, default=True
When True, an alternating sign is added to the features so as to approximately conserve the inner product in the hashed space even for small n\_features. This approach is similar to sparse random projection.
New in version 0.19.
**dtype**type, default=np.float64
Type of the matrix returned by fit\_transform() or transform().
See also
[`CountVectorizer`](sklearn.feature_extraction.text.countvectorizer#sklearn.feature_extraction.text.CountVectorizer "sklearn.feature_extraction.text.CountVectorizer")
Convert a collection of text documents to a matrix of token counts.
[`TfidfVectorizer`](sklearn.feature_extraction.text.tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer "sklearn.feature_extraction.text.TfidfVectorizer")
Convert a collection of raw documents to a matrix of TF-IDF features.
#### Examples
```
>>> from sklearn.feature_extraction.text import HashingVectorizer
>>> corpus = [
... 'This is the first document.',
... 'This document is the second document.',
... 'And this is the third one.',
... 'Is this the first document?',
... ]
>>> vectorizer = HashingVectorizer(n_features=2**4)
>>> X = vectorizer.fit_transform(corpus)
>>> print(X.shape)
(4, 16)
```
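Because the vectorizer is stateless, `transform` can be called directly on new batches of documents without a prior `fit`, which is what makes it suitable for out-of-core and streaming pipelines. A short sketch reusing the toy corpus above:
```
>>> # no fit needed: the hashing trick fixes the token-to-column mapping up front
>>> X_batch_1 = vectorizer.transform(corpus[:2])
>>> X_batch_2 = vectorizer.transform(corpus[2:])
>>> X_batch_1.shape, X_batch_2.shape
((2, 16), (2, 16))
```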
#### Methods
| | |
| --- | --- |
| [`build_analyzer`](#sklearn.feature_extraction.text.HashingVectorizer.build_analyzer "sklearn.feature_extraction.text.HashingVectorizer.build_analyzer")() | Return a callable to process input data. |
| [`build_preprocessor`](#sklearn.feature_extraction.text.HashingVectorizer.build_preprocessor "sklearn.feature_extraction.text.HashingVectorizer.build_preprocessor")() | Return a function to preprocess the text before tokenization. |
| [`build_tokenizer`](#sklearn.feature_extraction.text.HashingVectorizer.build_tokenizer "sklearn.feature_extraction.text.HashingVectorizer.build_tokenizer")() | Return a function that splits a string into a sequence of tokens. |
| [`decode`](#sklearn.feature_extraction.text.HashingVectorizer.decode "sklearn.feature_extraction.text.HashingVectorizer.decode")(doc) | Decode the input into a string of unicode symbols. |
| [`fit`](#sklearn.feature_extraction.text.HashingVectorizer.fit "sklearn.feature_extraction.text.HashingVectorizer.fit")(X[, y]) | No-op: this transformer is stateless. |
| [`fit_transform`](#sklearn.feature_extraction.text.HashingVectorizer.fit_transform "sklearn.feature_extraction.text.HashingVectorizer.fit_transform")(X[, y]) | Transform a sequence of documents to a document-term matrix. |
| [`get_params`](#sklearn.feature_extraction.text.HashingVectorizer.get_params "sklearn.feature_extraction.text.HashingVectorizer.get_params")([deep]) | Get parameters for this estimator. |
| [`get_stop_words`](#sklearn.feature_extraction.text.HashingVectorizer.get_stop_words "sklearn.feature_extraction.text.HashingVectorizer.get_stop_words")() | Build or fetch the effective stop words list. |
| [`partial_fit`](#sklearn.feature_extraction.text.HashingVectorizer.partial_fit "sklearn.feature_extraction.text.HashingVectorizer.partial_fit")(X[, y]) | No-op: this transformer is stateless. |
| [`set_params`](#sklearn.feature_extraction.text.HashingVectorizer.set_params "sklearn.feature_extraction.text.HashingVectorizer.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_extraction.text.HashingVectorizer.transform "sklearn.feature_extraction.text.HashingVectorizer.transform")(X) | Transform a sequence of documents to a document-term matrix. |
build\_analyzer()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L418)
Return a callable to process input data.
The callable handles preprocessing, tokenization, and n-grams generation.
Returns:
analyzer: callable
A function to handle preprocessing, tokenization and n-grams generation.
build\_preprocessor()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L321)
Return a function to preprocess the text before tokenization.
Returns:
preprocessor: callable
A function to preprocess the text before tokenization.
build\_tokenizer()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L348)
Return a function that splits a string into a sequence of tokens.
Returns:
tokenizer: callable
A function to split a string into a sequence of tokens.
decode(*doc*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L208)
Decode the input into a string of unicode symbols.
The decoding strategy depends on the vectorizer parameters.
Parameters:
**doc**bytes or str
The string to decode.
Returns:
doc: str
A string of unicode symbols.
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L794)
No-op: this transformer is stateless.
Parameters:
**X**ndarray of shape [n\_samples, n\_features]
Training data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
HashingVectorizer instance.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L852)
Transform a sequence of documents to a document-term matrix.
Parameters:
**X**iterable over raw text documents, length = n\_samples
Samples. Each sample must be a text document (either bytes or unicode strings, file name or file object depending on the constructor argument) which will be tokenized and hashed.
**y**any
Ignored. This parameter exists only for compatibility with sklearn.pipeline.Pipeline.
Returns:
**X**sparse matrix of shape (n\_samples, n\_features)
Document-term matrix.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_stop\_words()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L368)
Build or fetch the effective stop words list.
Returns:
stop\_words: list or None
A list of stop words.
partial\_fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L773)
No-op: this transformer is stateless.
This method is just there to mark the fact that this transformer can work in a streaming setup.
Parameters:
**X**ndarray of shape [n\_samples, n\_features]
Training data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
HashingVectorizer instance.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/text.py#L822)
Transform a sequence of documents to a document-term matrix.
Parameters:
**X**iterable over raw text documents, length = n\_samples
Samples. Each sample must be a text document (either bytes or unicode strings, file name or file object depending on the constructor argument) which will be tokenized and hashed.
Returns:
**X**sparse matrix of shape (n\_samples, n\_features)
Document-term matrix.
Examples using `sklearn.feature_extraction.text.HashingVectorizer`
------------------------------------------------------------------
[Out-of-core classification of text documents](../../auto_examples/applications/plot_out_of_core_classification#sphx-glr-auto-examples-applications-plot-out-of-core-classification-py)
[Clustering text documents using k-means](../../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
[FeatureHasher and DictVectorizer Comparison](../../auto_examples/text/plot_hashing_vs_dict_vectorizer#sphx-glr-auto-examples-text-plot-hashing-vs-dict-vectorizer-py)
scikit_learn sklearn.base.is_classifier sklearn.base.is\_classifier
===========================
sklearn.base.is\_classifier(*estimator*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L1001)
Return True if the given estimator is (probably) a classifier.
Parameters:
**estimator**object
Estimator object to test.
Returns:
**out**bool
True if estimator is a classifier and False otherwise.
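A minimal sketch (scikit-learn classifiers expose an `_estimator_type` of `"classifier"`, which is what this helper checks):
```
>>> from sklearn.base import is_classifier
>>> from sklearn.svm import SVC, SVR
>>> is_classifier(SVC())
True
>>> is_classifier(SVR())
False
```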
scikit_learn sklearn.linear_model.Lasso sklearn.linear\_model.Lasso
===========================
*class*sklearn.linear\_model.Lasso(*alpha=1.0*, *\**, *fit\_intercept=True*, *normalize='deprecated'*, *precompute=False*, *copy\_X=True*, *max\_iter=1000*, *tol=0.0001*, *warm\_start=False*, *positive=False*, *random\_state=None*, *selection='cyclic'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L1134)
Linear Model trained with L1 prior as regularizer (aka the Lasso).
The optimization objective for Lasso is:
```
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
```
Technically the Lasso model is optimizing the same objective function as the Elastic Net with `l1_ratio=1.0` (no L2 penalty).
Read more in the [User Guide](../linear_model#lasso).
Parameters:
**alpha**float, default=1.0
Constant that multiplies the L1 term, controlling regularization strength. `alpha` must be a non-negative float i.e. in `[0, inf)`.
When `alpha = 0`, the objective is equivalent to ordinary least squares, solved by the [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") object. For numerical reasons, using `alpha = 0` with the `Lasso` object is not advised. Instead, you should use the [`LinearRegression`](sklearn.linear_model.linearregression#sklearn.linear_model.LinearRegression "sklearn.linear_model.LinearRegression") object.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**precompute**bool or array-like of shape (n\_features, n\_features), default=False
Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always `False` to preserve sparsity.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**max\_iter**int, default=1000
The maximum number of iterations.
**tol**float, default=1e-4
The tolerance for the optimization: if the updates are smaller than `tol`, the optimization code checks the dual gap for optimality and continues until it is smaller than `tol`, see Notes below.
**warm\_start**bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**positive**bool, default=False
When set to `True`, forces the coefficients to be positive.
**random\_state**int, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when `selection` == ‘random’. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**selection**{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
Attributes:
**coef\_**ndarray of shape (n\_features,) or (n\_targets, n\_features)
Parameter vector (w in the cost function formula).
**dual\_gap\_**float or ndarray of shape (n\_targets,)
Given param alpha, the dual gaps at the end of the optimization, same shape as each observation of y.
[`sparse_coef_`](#sklearn.linear_model.Lasso.sparse_coef_ "sklearn.linear_model.Lasso.sparse_coef_")sparse matrix of shape (n\_features, 1) or (n\_targets, n\_features)
Sparse representation of the fitted `coef_`.
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function.
**n\_iter\_**int or list of int
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Regularization path using LARS.
[`lasso_path`](sklearn.linear_model.lasso_path#sklearn.linear_model.lasso_path "sklearn.linear_model.lasso_path")
Regularization path using Lasso.
[`LassoLars`](sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars "sklearn.linear_model.LassoLars")
Lasso path along the regularization parameter using the LARS algorithm.
[`LassoCV`](sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")
Lasso alpha parameter by cross-validation.
[`LassoLarsCV`](sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")
Lasso least angle parameter algorithm by cross-validation.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Sparse coding array estimator.
#### Notes
The algorithm used to fit the model is coordinate descent.
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to `1 / (2C)` in other linear models such as [`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") or [`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"). If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
The precise stopping criteria based on `tol` are the following: First, check that the maximum coordinate update, i.e. \(\max\_j |w\_j^{new} - w\_j^{old}|\), is smaller than `tol` times the maximum absolute coefficient, \(\max\_j |w\_j|\). If so, then additionally check whether the dual gap is smaller than `tol` times \(||y||\_2^2 / n\_{\text{samples}}\).
#### Examples
```
>>> from sklearn import linear_model
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
Lasso(alpha=0.1)
>>> print(clf.coef_)
[0.85 0. ]
>>> print(clf.intercept_)
0.15...
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.Lasso.fit "sklearn.linear_model.Lasso.fit")(X, y[, sample\_weight, check\_input]) | Fit model with coordinate descent. |
| [`get_params`](#sklearn.linear_model.Lasso.get_params "sklearn.linear_model.Lasso.get_params")([deep]) | Get parameters for this estimator. |
| [`path`](#sklearn.linear_model.Lasso.path "sklearn.linear_model.Lasso.path")(X, y, \*[, l1\_ratio, eps, n\_alphas, ...]) | Compute elastic net path with coordinate descent. |
| [`predict`](#sklearn.linear_model.Lasso.predict "sklearn.linear_model.Lasso.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.Lasso.score "sklearn.linear_model.Lasso.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.Lasso.set_params "sklearn.linear_model.Lasso.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*, *check\_input=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L873)
Fit model with coordinate descent.
Parameters:
**X**{ndarray, sparse matrix} of (n\_samples, n\_features)
Data.
**y**{ndarray, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_targets)
Target. Will be cast to X’s dtype if necessary.
**sample\_weight**float or array-like of shape (n\_samples,), default=None
Sample weights. Internally, the `sample_weight` vector will be rescaled to sum to `n_samples`.
New in version 0.23.
**check\_input**bool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing.
Returns:
**self**object
Fitted estimator.
#### Notes
Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary.
To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
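As a sketch of this advice, `numpy.asfortranarray` can be used to put the data in Fortran (column-major) order once, up front; the data below is randomly generated and purely illustrative:
```
>>> import numpy as np
>>> from sklearn.linear_model import Lasso
>>> rng = np.random.RandomState(0)
>>> X = np.asfortranarray(rng.rand(100, 5))  # column-major layout, as recommended above
>>> y = X @ np.array([1., 0., 0., 2., 0.])   # sparse ground-truth coefficients
>>> X.flags['F_CONTIGUOUS']
True
>>> Lasso(alpha=0.01).fit(X, y).coef_.shape
(5,)
```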
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*static*path(*X*, *y*, *\**, *l1\_ratio=0.5*, *eps=0.001*, *n\_alphas=100*, *alphas=None*, *precompute='auto'*, *Xy=None*, *copy\_X=True*, *coef\_init=None*, *verbose=False*, *return\_n\_iter=False*, *positive=False*, *check\_input=True*, *\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L366)
Compute elastic net path with coordinate descent.
The elastic net optimization function varies for mono and multi-outputs.
For mono-output tasks it is:
```
1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
```
For multi-output tasks it is:
```
(1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
```
Where:
```
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
```
i.e. the sum of norm of each row.
Read more in the [User Guide](../linear_model#elastic-net).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If `y` is mono-output then `X` can be sparse.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**l1\_ratio**float, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). `l1_ratio=1` corresponds to the Lasso.
**eps**float, default=1e-3
Length of the path. `eps=1e-3` means that `alpha_min / alpha_max = 1e-3`.
**n\_alphas**int, default=100
Number of alphas along the regularization path.
**alphas**ndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
**precompute**‘auto’, bool or array-like of shape (n\_features, n\_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**Xy**array-like of shape (n\_features,) or (n\_features, n\_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**coef\_init**ndarray of shape (n\_features, ), default=None
The initial values of the coefficients.
**verbose**bool or int, default=False
Amount of verbosity.
**return\_n\_iter**bool, default=False
Whether to return the number of iterations or not.
**positive**bool, default=False
If set to True, forces coefficients to be positive. (Only allowed when `y.ndim == 1`).
**check\_input**bool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**\*\*params**kwargs
Keyword arguments passed to the coordinate descent solver.
Returns:
**alphas**ndarray of shape (n\_alphas,)
The alphas along the path where models are computed.
**coefs**ndarray of shape (n\_features, n\_alphas) or (n\_targets, n\_features, n\_alphas)
Coefficients along the path.
**dual\_gaps**ndarray of shape (n\_alphas,)
The dual gaps at the end of the optimization for each alpha.
**n\_iters**list of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when `return_n_iter` is set to True).
See also
[`MultiTaskElasticNet`](sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet "sklearn.linear_model.MultiTaskElasticNet")
Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.
[`MultiTaskElasticNetCV`](sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV "sklearn.linear_model.MultiTaskElasticNetCV")
Multi-task L1/L2 ElasticNet with built-in cross-validation.
[`ElasticNet`](sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet "sklearn.linear_model.ElasticNet")
Linear regression with combined L1 and L2 priors as regularizer.
[`ElasticNetCV`](sklearn.linear_model.elasticnetcv#sklearn.linear_model.ElasticNetCV "sklearn.linear_model.ElasticNetCV")
Elastic Net model with iterative fitting along a regularization path.
#### Notes
For an example, see [examples/linear\_model/plot\_lasso\_coordinate\_descent\_path.py](../../auto_examples/linear_model/plot_lasso_coordinate_descent_path#sphx-glr-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py).
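A minimal sketch of calling the static `path` method directly, on the same toy data as the class example above; `l1_ratio=1.0` makes the elastic-net path coincide with the lasso path:
```
>>> import numpy as np
>>> from sklearn.linear_model import Lasso
>>> X = np.array([[0., 0.], [1., 1.], [2., 2.]])
>>> y = np.array([0., 1., 2.])
>>> alphas, coefs, dual_gaps = Lasso.path(X, y, l1_ratio=1.0, n_alphas=5)
>>> alphas.shape, coefs.shape, dual_gaps.shape
((5,), (2, 5), (5,))
```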
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
*property*sparse\_coef\_
Sparse representation of the fitted `coef_`.
Examples using `sklearn.linear_model.Lasso`
-------------------------------------------
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Compressive sensing: tomography reconstruction with L1 prior (Lasso)](../../auto_examples/applications/plot_tomography_l1_reconstruction#sphx-glr-auto-examples-applications-plot-tomography-l1-reconstruction-py)
[Joint feature selection with multi-task Lasso](../../auto_examples/linear_model/plot_multi_task_lasso_support#sphx-glr-auto-examples-linear-model-plot-multi-task-lasso-support-py)
[Lasso and Elastic Net for Sparse Signals](../../auto_examples/linear_model/plot_lasso_and_elasticnet#sphx-glr-auto-examples-linear-model-plot-lasso-and-elasticnet-py)
[Lasso on dense and sparse data](../../auto_examples/linear_model/plot_lasso_dense_vs_sparse_data#sphx-glr-auto-examples-linear-model-plot-lasso-dense-vs-sparse-data-py)
[Cross-validation on diabetes Dataset Exercise](../../auto_examples/exercises/plot_cv_diabetes#sphx-glr-auto-examples-exercises-plot-cv-diabetes-py)
scikit_learn sklearn.feature_selection.f_classif sklearn.feature\_selection.f\_classif
=====================================
sklearn.feature\_selection.f\_classif(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L118)
Compute the ANOVA F-value for the provided sample.
Read more in the [User Guide](../feature_selection#univariate-feature-selection).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The set of regressors that will be tested sequentially.
**y**ndarray of shape (n\_samples,)
The target vector.
Returns:
**f\_statistic**ndarray of shape (n\_features,)
F-statistic for each feature.
**p\_values**ndarray of shape (n\_features,)
P-values associated with the F-statistic.
See also
[`chi2`](sklearn.feature_selection.chi2#sklearn.feature_selection.chi2 "sklearn.feature_selection.chi2")
Chi-squared stats of non-negative features for classification tasks.
[`f_regression`](sklearn.feature_selection.f_regression#sklearn.feature_selection.f_regression "sklearn.feature_selection.f_regression")
F-value between label/feature for regression tasks.
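A minimal sketch on synthetic data (the sample and feature counts are arbitrary):
```
>>> from sklearn.datasets import make_classification
>>> from sklearn.feature_selection import f_classif
>>> X, y = make_classification(n_samples=100, n_features=4, random_state=0)
>>> f_statistic, p_values = f_classif(X, y)
>>> f_statistic.shape, p_values.shape
((4,), (4,))
```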
Examples using `sklearn.feature_selection.f_classif`
----------------------------------------------------
[Pipeline ANOVA SVM](../../auto_examples/feature_selection/plot_feature_selection_pipeline#sphx-glr-auto-examples-feature-selection-plot-feature-selection-pipeline-py)
[Univariate Feature Selection](../../auto_examples/feature_selection/plot_feature_selection#sphx-glr-auto-examples-feature-selection-plot-feature-selection-py)
scikit_learn sklearn.utils.check_random_state sklearn.utils.check\_random\_state
==================================
sklearn.utils.check\_random\_state(*seed*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L1161)
Turn seed into a np.random.RandomState instance.
Parameters:
**seed**None, int or instance of RandomState
If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError.
Returns:
[`numpy.random.RandomState`](https://numpy.org/doc/stable/reference/random/legacy.html#numpy.random.RandomState "(in NumPy v1.23)")
The random state object based on `seed` parameter.
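The three accepted seed types behave as follows (a minimal sketch):
```
>>> import numpy as np
>>> from sklearn.utils import check_random_state
>>> rs = check_random_state(42)   # int: a new RandomState seeded with 42
>>> isinstance(rs, np.random.RandomState)
True
>>> check_random_state(rs) is rs  # RandomState instance: returned unchanged
True
>>> check_random_state(None) is check_random_state(None)  # None: the shared global RandomState
True
```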
Examples using `sklearn.utils.check_random_state`
-------------------------------------------------
[Empirical evaluation of the impact of k-means initialization](../../auto_examples/cluster/plot_kmeans_stability_low_dim_dense#sphx-glr-auto-examples-cluster-plot-kmeans-stability-low-dim-dense-py)
[MNIST classification using multinomial logistic + L1](../../auto_examples/linear_model/plot_sparse_logistic_regression_mnist#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-mnist-py)
[Manifold Learning methods on a severed sphere](../../auto_examples/manifold/plot_manifold_sphere#sphx-glr-auto-examples-manifold-plot-manifold-sphere-py)
[Face completion with a multi-output estimators](../../auto_examples/miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py)
[Isotonic Regression](../../auto_examples/miscellaneous/plot_isotonic_regression#sphx-glr-auto-examples-miscellaneous-plot-isotonic-regression-py)
[Scaling the regularization parameter for SVCs](../../auto_examples/svm/plot_svm_scale_c#sphx-glr-auto-examples-svm-plot-svm-scale-c-py)
scikit_learn sklearn.model_selection.LeavePGroupsOut sklearn.model\_selection.LeavePGroupsOut
========================================
*class*sklearn.model\_selection.LeavePGroupsOut(*n\_groups*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L1231)
Leave P Group(s) Out cross-validator
Provides train/test indices to split data according to a third-party provided group. This group information can be used to encode arbitrary domain specific stratifications of the samples as integers.
For instance the groups could be the year of collection of the samples and thus allow for cross-validation against time-based splits.
The difference between LeavePGroupsOut and LeaveOneGroupOut is that the former builds the test sets with all the samples assigned to `p` different values of the groups, while the latter builds each test set from the samples assigned to a single group.
Read more in the [User Guide](../cross_validation#leave-p-groups-out).
Parameters:
**n\_groups**int
Number of groups (`p`) to leave out in the test split.
See also
[`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")
K-fold iterator variant with non-overlapping groups.
#### Examples
```
>>> import numpy as np
>>> from sklearn.model_selection import LeavePGroupsOut
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> y = np.array([1, 2, 1])
>>> groups = np.array([1, 2, 3])
>>> lpgo = LeavePGroupsOut(n_groups=2)
>>> lpgo.get_n_splits(X, y, groups)
3
>>> lpgo.get_n_splits(groups=groups) # 'groups' is always required
3
>>> print(lpgo)
LeavePGroupsOut(n_groups=2)
>>> for train_index, test_index in lpgo.split(X, y, groups):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
... print(X_train, X_test, y_train, y_test)
TRAIN: [2] TEST: [0 1]
[[5 6]] [[1 2]
[3 4]] [1] [1 2]
TRAIN: [1] TEST: [0 2]
[[3 4]] [[1 2]
[5 6]] [2] [1 1]
TRAIN: [0] TEST: [1 2]
[[1 2]] [[3 4]
[5 6]] [1] [2 1]
```
#### Methods
| | |
| --- | --- |
| [`get_n_splits`](#sklearn.model_selection.LeavePGroupsOut.get_n_splits "sklearn.model_selection.LeavePGroupsOut.get_n_splits")([X, y, groups]) | Returns the number of splitting iterations in the cross-validator |
| [`split`](#sklearn.model_selection.LeavePGroupsOut.split "sklearn.model_selection.LeavePGroupsOut.split")(X[, y, groups]) | Generate indices to split data into training and test set. |
get\_n\_splits(*X=None*, *y=None*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L1311)
Returns the number of splitting iterations in the cross-validator
Parameters:
**X**object
Always ignored, exists for compatibility.
**y**object
Always ignored, exists for compatibility.
**groups**array-like of shape (n\_samples,)
Group labels for the samples used while splitting the dataset into train/test set. This ‘groups’ parameter must always be specified to calculate the number of splits, though the other parameters can be omitted.
Returns:
**n\_splits**int
Returns the number of splitting iterations in the cross-validator.
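The number of splits equals the number of ways to choose `p` groups out of the distinct group labels. A short sketch with hypothetical group labels:
```
>>> from sklearn.model_selection import LeavePGroupsOut
>>> groups = [1, 1, 2, 2, 3, 3, 4, 4]  # 4 distinct groups
>>> LeavePGroupsOut(n_groups=2).get_n_splits(groups=groups)  # C(4, 2) combinations
6
```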
split(*X*, *y=None*, *groups=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L1338)
Generate indices to split data into training and test set.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,), default=None
The target variable for supervised learning problems.
**groups**array-like of shape (n\_samples,)
Group labels for the samples used while splitting the dataset into train/test set.
Yields:
**train**ndarray
The training set indices for that split.
**test**ndarray
The testing set indices for that split.
scikit_learn sklearn.neighbors.KNeighborsTransformer sklearn.neighbors.KNeighborsTransformer
=======================================
*class*sklearn.neighbors.KNeighborsTransformer(*\**, *mode='distance'*, *n\_neighbors=5*, *algorithm='auto'*, *leaf\_size=30*, *metric='minkowski'*, *p=2*, *metric\_params=None*, *n\_jobs=1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_graph.py#L226)
Transform X into a (weighted) graph of k nearest neighbors.
The transformed data is a sparse graph as returned by kneighbors\_graph.
Read more in the [User Guide](../neighbors#neighbors-transformer).
New in version 0.22.
Parameters:
**mode**{‘distance’, ‘connectivity’}, default=’distance’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric.
**n\_neighbors**int, default=5
Number of neighbors for each sample in the transformed sparse graph. For compatibility reasons, as each sample is considered as its own neighbor, one extra neighbor will be computed when mode == ‘distance’. In this case, the sparse graph contains (n\_neighbors + 1) neighbors.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree")
* ‘kd\_tree’ will use [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree")
* ‘brute’ will use a brute-force search.
* ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.neighbors.KNeighborsTransformer.fit "sklearn.neighbors.KNeighborsTransformer.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Default is “minkowski”, which results in the standard Euclidean distance when p = 2. See the documentation of [scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in [`distance_metrics`](sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics "sklearn.metrics.pairwise.distance_metrics") for valid metric values.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
Distance matrices are not supported.
**p**int, default=2
Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise\_distances. When p = 1, this is equivalent to using manhattan\_distance (l1), and euclidean\_distance (l2) for p = 2. For arbitrary p, minkowski\_distance (l\_p) is used.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**n\_jobs**int, default=1
The number of parallel jobs to run for neighbors search. If `-1`, then the number of jobs is set to the number of CPU cores.
Attributes:
**effective\_metric\_**str or callable
The distance metric used. It will be the same as the `metric` parameter or a synonym of it, e.g. ‘euclidean’ if the `metric` parameter is set to ‘minkowski’ and the `p` parameter is set to 2.
**effective\_metric\_params\_**dict
Additional keyword arguments for the metric function. For most metrics this will be the same as the `metric_params` parameter, but it may also contain the `p` parameter value if the `effective_metric_` attribute is set to ‘minkowski’.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_fit\_**int
Number of samples in the fitted data.
See also
[`kneighbors_graph`](sklearn.neighbors.kneighbors_graph#sklearn.neighbors.kneighbors_graph "sklearn.neighbors.kneighbors_graph")
Compute the weighted graph of k-neighbors for points in X.
[`RadiusNeighborsTransformer`](sklearn.neighbors.radiusneighborstransformer#sklearn.neighbors.RadiusNeighborsTransformer "sklearn.neighbors.RadiusNeighborsTransformer")
Transform X into a weighted graph of neighbors nearer than a radius.
#### Examples
```
>>> from sklearn.datasets import load_wine
>>> from sklearn.neighbors import KNeighborsTransformer
>>> X, _ = load_wine(return_X_y=True)
>>> X.shape
(178, 13)
>>> transformer = KNeighborsTransformer(n_neighbors=5, mode='distance')
>>> X_dist_graph = transformer.fit_transform(X)
>>> X_dist_graph.shape
(178, 178)
```
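The main use case is precomputing the neighbors graph once and feeding it to a downstream estimator that accepts `metric='precomputed'`. A minimal sketch with `Isomap`, assuming the 10-neighbor graph of the wine data is sufficiently connected for the embedding:
```
>>> from sklearn.datasets import load_wine
>>> from sklearn.manifold import Isomap
>>> from sklearn.neighbors import KNeighborsTransformer
>>> from sklearn.pipeline import make_pipeline
>>> X, _ = load_wine(return_X_y=True)
>>> est = make_pipeline(
...     KNeighborsTransformer(n_neighbors=10, mode='distance'),
...     Isomap(n_neighbors=10, metric='precomputed'))
>>> est.fit_transform(X).shape  # 2 embedding components by default
(178, 2)
```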
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.neighbors.KNeighborsTransformer.fit "sklearn.neighbors.KNeighborsTransformer.fit")(X[, y]) | Fit the k-nearest neighbors transformer from the training dataset. |
| [`fit_transform`](#sklearn.neighbors.KNeighborsTransformer.fit_transform "sklearn.neighbors.KNeighborsTransformer.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.neighbors.KNeighborsTransformer.get_feature_names_out "sklearn.neighbors.KNeighborsTransformer.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.neighbors.KNeighborsTransformer.get_params "sklearn.neighbors.KNeighborsTransformer.get_params")([deep]) | Get parameters for this estimator. |
| [`kneighbors`](#sklearn.neighbors.KNeighborsTransformer.kneighbors "sklearn.neighbors.KNeighborsTransformer.kneighbors")([X, n\_neighbors, return\_distance]) | Find the K-neighbors of a point. |
| [`kneighbors_graph`](#sklearn.neighbors.KNeighborsTransformer.kneighbors_graph "sklearn.neighbors.KNeighborsTransformer.kneighbors_graph")([X, n\_neighbors, mode]) | Compute the (weighted) graph of k-Neighbors for points in X. |
| [`set_params`](#sklearn.neighbors.KNeighborsTransformer.set_params "sklearn.neighbors.KNeighborsTransformer.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.neighbors.KNeighborsTransformer.transform "sklearn.neighbors.KNeighborsTransformer.transform")(X) | Compute the (weighted) graph of Neighbors for points in X. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_graph.py#L368)
Fit the k-nearest neighbors transformer from the training dataset.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples) if metric=’precomputed’
Training data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**KNeighborsTransformer
The fitted k-nearest neighbors transformer.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_graph.py#L410)
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit\_params and returns a transformed version of X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training set.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**Xt**sparse matrix of shape (n\_samples, n\_samples)
Xt[i, j] is assigned the weight of edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.neighbors.KNeighborsTransformer.fit "sklearn.neighbors.KNeighborsTransformer.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
kneighbors(*X=None*, *n\_neighbors=None*, *return\_distance=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L670)
Find the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.
Parameters:
**X**array-like, shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**n\_neighbors**int, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
**return\_distance**bool, default=True
Whether or not to return the distances.
Returns:
**neigh\_dist**ndarray of shape (n\_queries, n\_neighbors)
Array representing the lengths to points, only present if return\_distance=True.
**neigh\_ind**ndarray of shape (n\_queries, n\_neighbors)
Indices of the nearest points in the population matrix.
#### Examples
In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1]
```
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
```
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points:
```
>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
[2]]...)
```
kneighbors\_graph(*X=None*, *n\_neighbors=None*, *mode='connectivity'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L860)
Compute the (weighted) graph of k-Neighbors for points in X.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For `metric='precomputed'` the shape should be (n\_queries, n\_indexed). Otherwise the shape should be (n\_queries, n\_features).
**n\_neighbors**int, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
**mode**{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are distances between points, type of distance depends on the selected metric parameter in NearestNeighbors class.
Returns:
**A**sparse-matrix of shape (n\_queries, n\_samples\_fit)
`n_samples_fit` is the number of samples in the fitted data. `A[i, j]` gives the weight of the edge connecting `i` to `j`. The matrix is of CSR format.
See also
[`NearestNeighbors.radius_neighbors_graph`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors.radius_neighbors_graph "sklearn.neighbors.NearestNeighbors.radius_neighbors_graph")
Compute the (weighted) graph of Neighbors for points in X.
#### Examples
```
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
```
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_graph.py#L388)
Compute the (weighted) graph of Neighbors for points in X.
Parameters:
**X**array-like of shape (n\_samples\_transform, n\_features)
Sample data.
Returns:
**Xt**sparse matrix of shape (n\_samples\_transform, n\_samples\_fit)
Xt[i, j] is assigned the weight of edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format.
Examples using `sklearn.neighbors.KNeighborsTransformer`
--------------------------------------------------------
[Release Highlights for scikit-learn 0.22](../../auto_examples/release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
[Approximate nearest neighbors in TSNE](../../auto_examples/neighbors/approximate_nearest_neighbors#sphx-glr-auto-examples-neighbors-approximate-nearest-neighbors-py)
[Caching nearest neighbors](../../auto_examples/neighbors/plot_caching_nearest_neighbors#sphx-glr-auto-examples-neighbors-plot-caching-nearest-neighbors-py)
scikit_learn sklearn.linear_model.HuberRegressor sklearn.linear\_model.HuberRegressor
====================================
*class*sklearn.linear\_model.HuberRegressor(*\**, *epsilon=1.35*, *max\_iter=100*, *alpha=0.0001*, *warm\_start=False*, *fit\_intercept=True*, *tol=1e-05*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_huber.py#L126)
L2-regularized linear regression model that is robust to outliers.
The Huber Regressor optimizes the squared loss for the samples where `|(y - Xw - c) / sigma| < epsilon` and the absolute loss for the samples where `|(y - Xw - c) / sigma| > epsilon`, where the model coefficients `w`, the intercept `c` and the scale `sigma` are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales.
The Huber loss function has the advantage of not being heavily influenced by the outliers while not completely ignoring their effect.
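Concretely, the objective being minimized can be sketched as follows (an informal restatement; see the User Guide for the exact formulation), where `H_eps` denotes the Huber function applied to the scaled residual:
```
min over w, c, sigma:
    sum_i ( sigma + H_eps((y_i - X_i w - c) / sigma) * sigma ) + alpha * ||w||^2_2
where
    H_eps(z) = z^2                            if |z| < epsilon
    H_eps(z) = 2 * epsilon * |z| - epsilon^2  otherwise
```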
Read more in the [User Guide](../linear_model#huber-regression)
New in version 0.18.
Parameters:
**epsilon**float, greater than 1.0, default=1.35
The parameter epsilon controls the number of samples that should be classified as outliers. The smaller the epsilon, the more robust it is to outliers.
**max\_iter**int, default=100
Maximum number of iterations that `scipy.optimize.minimize(method="L-BFGS-B")` should run for.
**alpha**float, default=0.0001
Strength of the squared L2 regularization. Note that the penalty is equal to `alpha * ||w||^2`. Must be in the range `[0, inf)`.
**warm\_start**bool, default=False
This is useful if the stored attributes of a previously used model have to be reused. If set to False, then the coefficients will be rewritten for every call to fit. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**fit\_intercept**bool, default=True
Whether or not to fit the intercept. This can be set to False if the data is already centered around the origin.
**tol**float, default=1e-05
The iteration will stop when `max{|proj g_i | i = 1, ..., n}` <= `tol` where pg\_i is the i-th component of the projected gradient.
Attributes:
**coef\_**array, shape (n\_features,)
Coefficients obtained by optimizing the L2-regularized Huber loss.
**intercept\_**float
Bias.
**scale\_**float
The value by which `|y - Xw - c|` is scaled down.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_iter\_**int
Number of iterations that `scipy.optimize.minimize(method="L-BFGS-B")` has run for.
Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed `max_iter`. `n_iter_` will now report at most `max_iter`.
**outliers\_**array, shape (n\_samples,)
A boolean mask which is set to True where the samples are identified as outliers.
See also
[`RANSACRegressor`](sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor "sklearn.linear_model.RANSACRegressor")
RANSAC (RANdom SAmple Consensus) algorithm.
[`TheilSenRegressor`](sklearn.linear_model.theilsenregressor#sklearn.linear_model.TheilSenRegressor "sklearn.linear_model.TheilSenRegressor")
Theil-Sen Estimator robust multivariate regression model.
[`SGDRegressor`](sklearn.linear_model.sgdregressor#sklearn.linear_model.SGDRegressor "sklearn.linear_model.SGDRegressor")
Fitted by minimizing a regularized empirical loss with SGD.
#### References
[1] Peter J. Huber, Elvezio M. Ronchetti, Robust Statistics Concomitant scale estimates, pg 172
[2] Art B. Owen (2006), A robust hybrid of lasso and ridge regression. <https://statweb.stanford.edu/~owen/reports/hhu.pdf>
#### Examples
```
>>> import numpy as np
>>> from sklearn.linear_model import HuberRegressor, LinearRegression
>>> from sklearn.datasets import make_regression
>>> rng = np.random.RandomState(0)
>>> X, y, coef = make_regression(
... n_samples=200, n_features=2, noise=4.0, coef=True, random_state=0)
>>> X[:4] = rng.uniform(10, 20, (4, 2))
>>> y[:4] = rng.uniform(10, 20, 4)
>>> huber = HuberRegressor().fit(X, y)
>>> huber.score(X, y)
-7.284...
>>> huber.predict(X[:1,])
array([806.7200...])
>>> linear = LinearRegression().fit(X, y)
>>> print("True coefficients:", coef)
True coefficients: [20.4923... 34.1698...]
>>> print("Huber coefficients:", huber.coef_)
Huber coefficients: [17.7906... 31.0106...]
>>> print("Linear Regression coefficients:", linear.coef_)
Linear Regression coefficients: [-1.9221... 7.0226...]
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.HuberRegressor.fit "sklearn.linear_model.HuberRegressor.fit")(X, y[, sample\_weight]) | Fit the model according to the given training data. |
| [`get_params`](#sklearn.linear_model.HuberRegressor.get_params "sklearn.linear_model.HuberRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.HuberRegressor.predict "sklearn.linear_model.HuberRegressor.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.HuberRegressor.score "sklearn.linear_model.HuberRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.HuberRegressor.set_params "sklearn.linear_model.HuberRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_huber.py#L265)
Fit the model according to the given training data.
Parameters:
**X**array-like, shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like, shape (n\_samples,)
Target vector relative to X.
**sample\_weight**array-like, shape (n\_samples,)
Weight given to each sample.
Returns:
**self**object
Fitted `HuberRegressor` estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.HuberRegressor`
----------------------------------------------------
[HuberRegressor vs Ridge on dataset with strong outliers](../../auto_examples/linear_model/plot_huber_vs_ridge#sphx-glr-auto-examples-linear-model-plot-huber-vs-ridge-py)
[Robust linear estimator fitting](../../auto_examples/linear_model/plot_robust_fit#sphx-glr-auto-examples-linear-model-plot-robust-fit-py)
scikit_learn sklearn.cross_decomposition.PLSCanonical sklearn.cross\_decomposition.PLSCanonical
=========================================
*class*sklearn.cross\_decomposition.PLSCanonical(*n\_components=2*, *\**, *scale=True*, *algorithm='nipals'*, *max\_iter=500*, *tol=1e-06*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L667)
Partial Least Squares transformer and regressor.
Read more in the [User Guide](../cross_decomposition#cross-decomposition).
New in version 0.8.
Parameters:
**n\_components**int, default=2
Number of components to keep. Should be in `[1, min(n_samples, n_features, n_targets)]`.
**scale**bool, default=True
Whether to scale `X` and `Y`.
**algorithm**{‘nipals’, ‘svd’}, default=’nipals’
The algorithm used to estimate the first singular vectors of the cross-covariance matrix. ‘nipals’ uses the power method while ‘svd’ will compute the whole SVD.
**max\_iter**int, default=500
The maximum number of iterations of the power method when `algorithm='nipals'`. Ignored otherwise.
**tol**float, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of `u_i - u_{i-1}` is less than `tol`, where `u` corresponds to the left singular vector.
**copy**bool, default=True
Whether to copy `X` and `Y` in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays.
Attributes:
**x\_weights\_**ndarray of shape (n\_features, n\_components)
The left singular vectors of the cross-covariance matrices of each iteration.
**y\_weights\_**ndarray of shape (n\_targets, n\_components)
The right singular vectors of the cross-covariance matrices of each iteration.
**x\_loadings\_**ndarray of shape (n\_features, n\_components)
The loadings of `X`.
**y\_loadings\_**ndarray of shape (n\_targets, n\_components)
The loadings of `Y`.
**x\_rotations\_**ndarray of shape (n\_features, n\_components)
The projection matrix used to transform `X`.
**y\_rotations\_**ndarray of shape (n\_targets, n\_components)
The projection matrix used to transform `Y`.
[`coef_`](#sklearn.cross_decomposition.PLSCanonical.coef_ "sklearn.cross_decomposition.PLSCanonical.coef_")ndarray of shape (n\_features, n\_targets)
The coefficients of the linear model.
**intercept\_**ndarray of shape (n\_targets,)
The intercepts of the linear model such that `Y` is approximated as `Y = X @ coef_ + intercept_`.
New in version 1.1.
**n\_iter\_**list of shape (n\_components,)
Number of iterations of the power method, for each component. Empty if `algorithm='svd'`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`CCA`](sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA "sklearn.cross_decomposition.CCA")
Canonical Correlation Analysis.
[`PLSSVD`](sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD "sklearn.cross_decomposition.PLSSVD")
Partial Least Square SVD.
#### Examples
```
>>> from sklearn.cross_decomposition import PLSCanonical
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> plsca = PLSCanonical(n_components=2)
>>> plsca.fit(X, Y)
PLSCanonical()
>>> X_c, Y_c = plsca.transform(X, Y)
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cross_decomposition.PLSCanonical.fit "sklearn.cross_decomposition.PLSCanonical.fit")(X, Y) | Fit model to data. |
| [`fit_transform`](#sklearn.cross_decomposition.PLSCanonical.fit_transform "sklearn.cross_decomposition.PLSCanonical.fit_transform")(X[, y]) | Learn and apply the dimension reduction on the train data. |
| [`get_feature_names_out`](#sklearn.cross_decomposition.PLSCanonical.get_feature_names_out "sklearn.cross_decomposition.PLSCanonical.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.cross_decomposition.PLSCanonical.get_params "sklearn.cross_decomposition.PLSCanonical.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.cross_decomposition.PLSCanonical.inverse_transform "sklearn.cross_decomposition.PLSCanonical.inverse_transform")(X[, Y]) | Transform data back to its original space. |
| [`predict`](#sklearn.cross_decomposition.PLSCanonical.predict "sklearn.cross_decomposition.PLSCanonical.predict")(X[, copy]) | Predict targets of given samples. |
| [`score`](#sklearn.cross_decomposition.PLSCanonical.score "sklearn.cross_decomposition.PLSCanonical.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.cross_decomposition.PLSCanonical.set_params "sklearn.cross_decomposition.PLSCanonical.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.cross_decomposition.PLSCanonical.transform "sklearn.cross_decomposition.PLSCanonical.transform")(X[, Y, copy]) | Apply the dimension reduction. |
*property*coef\_
The coefficients of the linear model.
fit(*X*, *Y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L198)
Fit model to data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of predictors.
**Y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target vectors, where `n_samples` is the number of samples and `n_targets` is the number of response variables.
Returns:
**self**object
Fitted model.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L479)
Learn and apply the dimension reduction on the train data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of predictors.
**y**array-like of shape (n\_samples, n\_targets), default=None
Target vectors, where `n_samples` is the number of samples and `n_targets` is the number of response variables.
Returns:
**out**ndarray of shape (n\_samples, n\_components) or tuple of two such ndarrays
Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.cross_decomposition.PLSCanonical.fit "sklearn.cross_decomposition.PLSCanonical.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*, *Y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L404)
Transform data back to its original space.
Parameters:
**X**array-like of shape (n\_samples, n\_components)
New data, where `n_samples` is the number of samples and `n_components` is the number of pls components.
**Y**array-like of shape (n\_samples, n\_components)
New target, where `n_samples` is the number of samples and `n_components` is the number of pls components.
Returns:
**X\_reconstructed**ndarray of shape (n\_samples, n\_features)
Return the reconstructed `X` data.
**Y\_reconstructed**ndarray of shape (n\_samples, n\_targets)
Return the reconstructed `X` target. Only returned when `Y` is given.
#### Notes
This transformation will only be exact if `n_components=n_features`.
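#### Examples
A minimal round-trip sketch (added here for illustration, not part of the upstream docstring), reusing the toy data from the class example. With `n_components=2` and three features the reconstruction is only approximate, but the original shape is recovered:
```
>>> import numpy as np
>>> from sklearn.cross_decomposition import PLSCanonical
>>> X = np.array([[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]])
>>> Y = np.array([[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]])
>>> plsca = PLSCanonical(n_components=2).fit(X, Y)
>>> X_back = plsca.inverse_transform(plsca.transform(X))  # approximate reconstruction
>>> X_back.shape
(4, 3)
```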
predict(*X*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L448)
Predict targets of given samples.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples.
**copy**bool, default=True
Whether to copy `X` and `Y`, or perform in-place normalization.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Returns predicted values.
#### Notes
This call requires the estimation of a matrix of shape `(n_features, n_targets)`, which may be an issue in high dimensional space.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*, *Y=None*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L365)
Apply the dimension reduction.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples to transform.
**Y**array-like of shape (n\_samples, n\_targets), default=None
Target vectors.
**copy**bool, default=True
Whether to copy `X` and `Y`, or perform in-place normalization.
Returns:
**x\_scores, y\_scores**array-like or tuple of array-like
Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
Examples using `sklearn.cross_decomposition.PLSCanonical`
---------------------------------------------------------
[Compare cross decomposition methods](../../auto_examples/cross_decomposition/plot_compare_cross_decomposition#sphx-glr-auto-examples-cross-decomposition-plot-compare-cross-decomposition-py)
scikit_learn sklearn.metrics.mean_tweedie_deviance sklearn.metrics.mean\_tweedie\_deviance
=======================================
sklearn.metrics.mean\_tweedie\_deviance(*y\_true*, *y\_pred*, *\**, *sample\_weight=None*, *power=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_regression.py#L1003)
Mean Tweedie deviance regression loss.
Read more in the [User Guide](../model_evaluation#mean-tweedie-deviance).
Parameters:
**y\_true**array-like of shape (n\_samples,)
Ground truth (correct) target values.
**y\_pred**array-like of shape (n\_samples,)
Estimated target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**power**float, default=0
Tweedie power parameter. Either power <= 0 or power >= 1.
The higher `power`, the less weight is given to extreme deviations between true and predicted targets.
* power < 0: Extreme stable distribution. Requires: y\_pred > 0.
* power = 0 : Normal distribution, output corresponds to mean\_squared\_error. y\_true and y\_pred can be any real numbers.
* power = 1 : Poisson distribution. Requires: y\_true >= 0 and y\_pred > 0.
* 1 < power < 2 : Compound Poisson distribution. Requires: y\_true >= 0 and y\_pred > 0.
* power = 2 : Gamma distribution. Requires: y\_true > 0 and y\_pred > 0.
* power = 3 : Inverse Gaussian distribution. Requires: y\_true > 0 and y\_pred > 0.
* otherwise : Positive stable distribution. Requires: y\_true > 0 and y\_pred > 0.
Returns:
**loss**float
A non-negative floating point value (the best value is 0.0).
#### Examples
```
>>> from sklearn.metrics import mean_tweedie_deviance
>>> y_true = [2, 0, 1, 4]
>>> y_pred = [0.5, 0.5, 2., 2.]
>>> mean_tweedie_deviance(y_true, y_pred, power=1)
1.4260...
```
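As a quick cross-check (an illustrative sketch added here, not from the upstream docstring), `power=0` reduces the deviance to the squared error, so the result agrees with `mean_squared_error` on the same toy data:
```
>>> from sklearn.metrics import mean_squared_error, mean_tweedie_deviance
>>> y_true = [2, 0, 1, 4]
>>> y_pred = [0.5, 0.5, 2., 2.]
>>> mean_tweedie_deviance(y_true, y_pred, power=0)
1.875
>>> mean_squared_error(y_true, y_pred)
1.875
```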
Examples using `sklearn.metrics.mean_tweedie_deviance`
------------------------------------------------------
[Tweedie regression on insurance claims](../../auto_examples/linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
scikit_learn sklearn.utils._safe_indexing sklearn.utils.\_safe\_indexing
==============================
sklearn.utils.\_safe\_indexing(*X*, *indices*, *\**, *axis=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/__init__.py#L290)
Return rows, items or columns of X using indices.
Warning
This utility is documented, but **private**. This means that backward compatibility might be broken without any deprecation cycle.
Parameters:
**X**array-like, sparse-matrix, list, pandas.DataFrame, pandas.Series
Data from which to sample rows, items or columns. `list` are only supported when `axis=0`.
**indices**bool, int, str, slice, array-like
* If `axis=0`, boolean and integer array-like, integer slice, and scalar integer are supported.
* If `axis=1`:
+ to select a single column, `indices` can be of `int` type for all `X` types and `str` only for dataframe. The selected subset will be 1D, unless `X` is a sparse matrix in which case it will be 2D.
+ to select multiple columns, `indices` can be one of the following: `list`, `array`, `slice`. The type used in these containers can be one of the following: `int`, `bool` and `str`. However, `str` is only supported when `X` is a dataframe. The selected subset will be 2D.
**axis**int, default=0
The axis along which `X` will be subsampled. `axis=0` will select rows while `axis=1` will select columns.
Returns:
subset
Subset of X on axis 0 or 1.
#### Notes
CSR, CSC, and LIL sparse matrices are supported. COO sparse matrices are not supported.
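#### Examples
An illustrative sketch (not from the upstream docs; keep in mind this helper is private and may change without deprecation):
```
>>> import numpy as np
>>> from sklearn.utils import _safe_indexing
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> _safe_indexing(X, [0, 2], axis=0)  # select rows 0 and 2
array([[1, 2],
       [5, 6]])
>>> _safe_indexing(X, 1, axis=1)  # select the second column as a 1D array
array([2, 4, 6])
```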
scikit_learn sklearn.datasets.load_linnerud sklearn.datasets.load\_linnerud
===============================
sklearn.datasets.load\_linnerud(*\**, *return\_X\_y=False*, *as\_frame=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_base.py#L1065)
Load and return the physical exercise Linnerud dataset.
This dataset is suitable for multi-output regression tasks.
| | |
| --- | --- |
| Samples total | 20 |
| Dimensionality | 3 (for both data and target) |
| Features | integer |
| Targets | integer |
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/toy_dataset.html#linnerrud-dataset).
Parameters:
**return\_X\_y**bool, default=False
If True, returns `(data, target)` instead of a Bunch object. See below for more information about the `data` and `target` object.
New in version 0.18.
**as\_frame**bool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target columns. If `return_X_y` is True, then (`data`, `target`) will be pandas DataFrames or Series as described below.
New in version 0.23.
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (20, 3)
The data matrix. If `as_frame=True`, `data` will be a pandas DataFrame.
target: {ndarray, dataframe} of shape (20, 3)
The regression targets. If `as_frame=True`, `target` will be a pandas DataFrame.
feature\_names: list
The names of the dataset columns.
target\_names: list
The names of the target columns.
frame: DataFrame of shape (20, 6)
Only present when `as_frame=True`. DataFrame with `data` and `target`.
New in version 0.23.
DESCR: str
The full description of the dataset.
data\_filename: str
The path to the location of the data.
target\_filename: str
The path to the location of the target.
New in version 0.20.
**(data, target)**tuple if `return_X_y` is True
Returns a tuple of two ndarrays or dataframe of shape `(20, 3)`. Each row represents one sample and each column represents the features in `X` and a target in `y` of a given sample.
New in version 0.18.
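#### Examples
A short usage sketch (added for illustration, not from the upstream docstring); the exercise variables form `data` and the physiological measurements form `target`:
```
>>> from sklearn.datasets import load_linnerud
>>> linnerud = load_linnerud()
>>> linnerud.data.shape
(20, 3)
>>> linnerud.target.shape
(20, 3)
>>> linnerud.feature_names
['Chins', 'Situps', 'Jumps']
>>> linnerud.target_names
['Weight', 'Waist', 'Pulse']
```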
scikit_learn sklearn.preprocessing.RobustScaler sklearn.preprocessing.RobustScaler
==================================
*class*sklearn.preprocessing.RobustScaler(*\**, *with\_centering=True*, *with\_scaling=True*, *quantile\_range=(25.0, 75.0)*, *copy=True*, *unit\_variance=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L1341)
Scale features using statistics that are robust to outliers.
This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).
Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the [`transform`](#sklearn.preprocessing.RobustScaler.transform "sklearn.preprocessing.RobustScaler.transform") method.
Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results.
New in version 0.17.
Read more in the [User Guide](../preprocessing#preprocessing-scaler).
Parameters:
**with\_centering**bool, default=True
If `True`, center the data before scaling. This will cause [`transform`](#sklearn.preprocessing.RobustScaler.transform "sklearn.preprocessing.RobustScaler.transform") to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
**with\_scaling**bool, default=True
If `True`, scale the data to interquartile range.
**quantile\_range**tuple (q\_min, q\_max), 0.0 < q\_min < q\_max < 100.0, default=(25.0, 75.0)
Quantile range used to calculate `scale_`. By default this is equal to the IQR, i.e., `q_min` is the first quartile and `q_max` is the third quartile.
New in version 0.18.
**copy**bool, default=True
If `False`, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
**unit\_variance**bool, default=False
If `True`, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of `q_max` and `q_min` for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up.
New in version 0.24.
Attributes:
**center\_**array of floats
The median value for each feature in the training set.
**scale\_**array of floats
The (scaled) interquartile range for each feature in the training set.
New in version 0.17: *scale\_* attribute.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`robust_scale`](sklearn.preprocessing.robust_scale#sklearn.preprocessing.robust_scale "sklearn.preprocessing.robust_scale")
Equivalent function without the estimator API.
[`sklearn.decomposition.PCA`](sklearn.decomposition.pca#sklearn.decomposition.PCA "sklearn.decomposition.PCA")
Further removes the linear correlation across features with ‘whiten=True’.
#### Notes
For a comparison of the different scalers, transformers, and normalizers, see [examples/preprocessing/plot\_all\_scaling.py](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py).
<https://en.wikipedia.org/wiki/Median> <https://en.wikipedia.org/wiki/Interquartile_range>
#### Examples
```
>>> from sklearn.preprocessing import RobustScaler
>>> X = [[ 1., -2., 2.],
... [ -2., 1., 3.],
... [ 4., 1., -2.]]
>>> transformer = RobustScaler().fit(X)
>>> transformer
RobustScaler()
>>> transformer.transform(X)
array([[ 0. , -2. , 0. ],
[-1. , 0. , 0.4],
[ 1. , 0. , -1.6]])
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.preprocessing.RobustScaler.fit "sklearn.preprocessing.RobustScaler.fit")(X[, y]) | Compute the median and quantiles to be used for scaling. |
| [`fit_transform`](#sklearn.preprocessing.RobustScaler.fit_transform "sklearn.preprocessing.RobustScaler.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.preprocessing.RobustScaler.get_feature_names_out "sklearn.preprocessing.RobustScaler.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.preprocessing.RobustScaler.get_params "sklearn.preprocessing.RobustScaler.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.preprocessing.RobustScaler.inverse_transform "sklearn.preprocessing.RobustScaler.inverse_transform")(X) | Scale back the data to the original representation. |
| [`set_params`](#sklearn.preprocessing.RobustScaler.set_params "sklearn.preprocessing.RobustScaler.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.preprocessing.RobustScaler.transform "sklearn.preprocessing.RobustScaler.transform")(X) | Center and scale the data. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L1466)
Compute the median and quantiles to be used for scaling.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to compute the median and quantiles used for later scaling along the features axis.
**y**Ignored
Not used, present here for API consistency by convention.
Returns:
**self**object
Fitted scaler.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L880)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Same as input features.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L1565)
Scale back the data to the original representation.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The rescaled data to be transformed back.
Returns:
**X\_tr**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Transformed array.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/preprocessing/_data.py#L1532)
Center and scale the data.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data used to scale along the specified axis.
Returns:
**X\_tr**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Transformed array.
Examples using `sklearn.preprocessing.RobustScaler`
---------------------------------------------------
[Compare the effect of different scalers on data with outliers](../../auto_examples/preprocessing/plot_all_scaling#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py)
scikit_learn sklearn.metrics.cohen_kappa_score sklearn.metrics.cohen\_kappa\_score
===================================
sklearn.metrics.cohen\_kappa\_score(*y1*, *y2*, *\**, *labels=None*, *weights=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_classification.py#L588)
Compute Cohen’s kappa: a statistic that measures inter-annotator agreement.
This function computes Cohen’s kappa [[1]](#r219a3b9132e1-1), a score that expresses the level of agreement between two annotators on a classification problem. It is defined as
\[\kappa = (p\_o - p\_e) / (1 - p\_e)\] where \(p\_o\) is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and \(p\_e\) is the expected agreement when both annotators assign labels randomly. \(p\_e\) is estimated using a per-annotator empirical prior over the class labels [[2]](#r219a3b9132e1-2).
Read more in the [User Guide](../model_evaluation#cohen-kappa).
Parameters:
**y1**array of shape (n\_samples,)
Labels assigned by the first annotator.
**y2**array of shape (n\_samples,)
Labels assigned by the second annotator. The kappa statistic is symmetric, so swapping `y1` and `y2` doesn’t change the value.
**labels**array-like of shape (n\_classes,), default=None
List of labels to index the matrix. This may be used to select a subset of labels. If `None`, all labels that appear at least once in `y1` or `y2` are used.
**weights**{‘linear’, ‘quadratic’}, default=None
Weighting type to calculate the score. `None` means no weighting; “linear” means linear weighting; “quadratic” means quadratic weighting.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**kappa**float
The kappa statistic, which is a number between -1 and 1. The maximum value means complete agreement; zero or lower means chance agreement.
#### References
[[1](#id1)] [J. Cohen (1960). “A coefficient of agreement for nominal scales”. Educational and Psychological Measurement 20(1):37-46.](https://doi.org/10.1177/001316446002000104)
[[2](#id2)] [R. Artstein and M. Poesio (2008). “Inter-coder agreement for computational linguistics”. Computational Linguistics 34(4):555-596](https://www.mitpressjournals.org/doi/pdf/10.1162/coli.07-034-R2).
[3] [Wikipedia entry for the Cohen’s kappa](https://en.wikipedia.org/wiki/Cohen%27s_kappa).
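#### Examples
An illustrative sketch (not from the upstream docstring): the two annotators below agree on three of four samples, giving \(p\_o = 0.75\) and \(p\_e = 0.5\), hence \(\kappa = 0.5\):
```
>>> from sklearn.metrics import cohen_kappa_score
>>> y1 = [0, 1, 1, 0]
>>> y2 = [0, 1, 1, 1]
>>> cohen_kappa_score(y1, y2)
0.5
```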
scikit_learn sklearn.metrics.jaccard_score sklearn.metrics.jaccard\_score
==============================
sklearn.metrics.jaccard\_score(*y\_true*, *y\_pred*, *\**, *labels=None*, *pos\_label=1*, *average='binary'*, *sample\_weight=None*, *zero\_division='warn'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_classification.py#L667)
Jaccard similarity coefficient score.
The Jaccard index [1], or Jaccard similarity coefficient, defined as the size of the intersection divided by the size of the union of two label sets, is used to compare set of predicted labels for a sample to the corresponding set of labels in `y_true`.
Read more in the [User Guide](../model_evaluation#jaccard-similarity-score).
Parameters:
**y\_true**1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels.
**y\_pred**1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier.
**labels**array-like of shape (n\_classes,), default=None
The set of labels to include when `average != 'binary'`, and their order if `average is None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `y_true` and `y_pred` are used in sorted order.
**pos\_label**str or int, default=1
The class to report if `average='binary'` and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting `labels=[pos_label]` and `average != 'binary'` will report scores for that label only.
**average**{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} or None, default=’binary’
If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
`'binary'`:
Only report results for the class specified by `pos_label`. This is applicable only if targets (`y_{true,pred}`) are binary.
`'micro'`:
Calculate metrics globally by counting the total true positives, false negatives and false positives.
`'macro'`:
Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
`'weighted'`:
Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance.
`'samples'`:
Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**zero\_division**“warn”, {0.0, 1.0}, default=”warn”
Sets the value to return when there is a zero division, i.e. when there are no positive values in either the predictions or the labels. If set to “warn”, this acts like 0, but a warning is also raised.
Returns:
**score**float or ndarray of shape (n\_unique\_labels,), dtype=np.float64
The Jaccard score. When `average` is not `None`, a single scalar is returned.
See also
[`accuracy_score`](sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score "sklearn.metrics.accuracy_score")
Function for calculating the accuracy score.
[`f1_score`](sklearn.metrics.f1_score#sklearn.metrics.f1_score "sklearn.metrics.f1_score")
Function for calculating the F1 score.
[`multilabel_confusion_matrix`](sklearn.metrics.multilabel_confusion_matrix#sklearn.metrics.multilabel_confusion_matrix "sklearn.metrics.multilabel_confusion_matrix")
Function for computing a confusion matrix for each class or sample.
#### Notes
[`jaccard_score`](#sklearn.metrics.jaccard_score "sklearn.metrics.jaccard_score") may be a poor metric if there are no positives for some samples or classes. Jaccard is undefined if there are no true or predicted labels, and our implementation will return a score of 0 with a warning.
#### References
[1] [Wikipedia entry for the Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index).
#### Examples
```
>>> import numpy as np
>>> from sklearn.metrics import jaccard_score
>>> y_true = np.array([[0, 1, 1],
... [1, 1, 0]])
>>> y_pred = np.array([[1, 1, 1],
... [1, 0, 0]])
```
In the binary case:
```
>>> jaccard_score(y_true[0], y_pred[0])
0.6666...
```
In the 2D comparison case (e.g. image similarity):
```
>>> jaccard_score(y_true, y_pred, average="micro")
0.6
```
In the multilabel case:
```
>>> jaccard_score(y_true, y_pred, average='samples')
0.5833...
>>> jaccard_score(y_true, y_pred, average='macro')
0.6666...
>>> jaccard_score(y_true, y_pred, average=None)
array([0.5, 0.5, 1. ])
```
In the multiclass case:
```
>>> y_pred = [0, 2, 1, 2]
>>> y_true = [0, 1, 2, 2]
>>> jaccard_score(y_true, y_pred, average=None)
array([1. , 0. , 0.33...])
```
Examples using `sklearn.metrics.jaccard_score`
----------------------------------------------
[Classifier Chain](../../auto_examples/multioutput/plot_classifier_chain_yeast#sphx-glr-auto-examples-multioutput-plot-classifier-chain-yeast-py)
scikit_learn sklearn.feature_selection.SelectFdr sklearn.feature\_selection.SelectFdr
====================================
*class*sklearn.feature\_selection.SelectFdr(*score\_func=<function f\_classif>*, *\**, *alpha=0.05*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L765)
Filter: Select the p-values for an estimated false discovery rate.
This uses the Benjamini-Hochberg procedure. `alpha` is an upper bound on the expected false discovery rate.
Read more in the [User Guide](../feature_selection#univariate-feature-selection).
Parameters:
**score\_func**callable, default=f\_classif
Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f\_classif (see below “See Also”). The default function only works with classification tasks.
**alpha**float, default=5e-2
The highest uncorrected p-value for features to keep.
Attributes:
**scores\_**array-like of shape (n\_features,)
Scores of features.
**pvalues\_**array-like of shape (n\_features,)
p-values of feature scores.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`f_classif`](sklearn.feature_selection.f_classif#sklearn.feature_selection.f_classif "sklearn.feature_selection.f_classif")
ANOVA F-value between label/feature for classification tasks.
[`mutual_info_classif`](sklearn.feature_selection.mutual_info_classif#sklearn.feature_selection.mutual_info_classif "sklearn.feature_selection.mutual_info_classif")
Mutual information for a discrete target.
[`chi2`](sklearn.feature_selection.chi2#sklearn.feature_selection.chi2 "sklearn.feature_selection.chi2")
Chi-squared stats of non-negative features for classification tasks.
[`f_regression`](sklearn.feature_selection.f_regression#sklearn.feature_selection.f_regression "sklearn.feature_selection.f_regression")
F-value between label/feature for regression tasks.
[`mutual_info_regression`](sklearn.feature_selection.mutual_info_regression#sklearn.feature_selection.mutual_info_regression "sklearn.feature_selection.mutual_info_regression")
Mutual information for a continuous target.
[`SelectPercentile`](sklearn.feature_selection.selectpercentile#sklearn.feature_selection.SelectPercentile "sklearn.feature_selection.SelectPercentile")
Select features based on percentile of the highest scores.
[`SelectKBest`](sklearn.feature_selection.selectkbest#sklearn.feature_selection.SelectKBest "sklearn.feature_selection.SelectKBest")
Select features based on the k highest scores.
[`SelectFpr`](sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr "sklearn.feature_selection.SelectFpr")
Select features based on a false positive rate test.
[`SelectFwe`](sklearn.feature_selection.selectfwe#sklearn.feature_selection.SelectFwe "sklearn.feature_selection.SelectFwe")
Select features based on family-wise error rate.
[`GenericUnivariateSelect`](sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect "sklearn.feature_selection.GenericUnivariateSelect")
Univariate feature selector with configurable mode.
#### References
<https://en.wikipedia.org/wiki/False_discovery_rate>
#### Examples
```
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFdr, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape
(569, 30)
>>> X_new = SelectFdr(chi2, alpha=0.01).fit_transform(X, y)
>>> X_new.shape
(569, 16)
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.feature_selection.SelectFdr.fit "sklearn.feature_selection.SelectFdr.fit")(X, y) | Run score function on (X, y) and get the appropriate features. |
| [`fit_transform`](#sklearn.feature_selection.SelectFdr.fit_transform "sklearn.feature_selection.SelectFdr.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.feature_selection.SelectFdr.get_feature_names_out "sklearn.feature_selection.SelectFdr.get_feature_names_out")([input\_features]) | Mask feature names according to selected features. |
| [`get_params`](#sklearn.feature_selection.SelectFdr.get_params "sklearn.feature_selection.SelectFdr.get_params")([deep]) | Get parameters for this estimator. |
| [`get_support`](#sklearn.feature_selection.SelectFdr.get_support "sklearn.feature_selection.SelectFdr.get_support")([indices]) | Get a mask, or integer index, of the features selected. |
| [`inverse_transform`](#sklearn.feature_selection.SelectFdr.inverse_transform "sklearn.feature_selection.SelectFdr.inverse_transform")(X) | Reverse the transformation operation. |
| [`set_params`](#sklearn.feature_selection.SelectFdr.set_params "sklearn.feature_selection.SelectFdr.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_selection.SelectFdr.transform "sklearn.feature_selection.SelectFdr.transform")(X) | Reduce X to the selected features. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L444)
Run score function on (X, y) and get the appropriate features.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The training input samples.
**y**array-like of shape (n\_samples,)
The target values (class labels in classification, real numbers in regression).
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L146)
Mask feature names according to selected features.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_support(*indices=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L33)
Get a mask, or integer index, of the features selected.
Parameters:
**indices**bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
Returns:
**support**array
An index that selects the retained features from a feature vector. If `indices` is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If `indices` is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L106)
Reverse the transformation operation.
Parameters:
**X**array of shape [n\_samples, n\_selected\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_original\_features]
`X` with columns of zeros inserted where features would have been removed by [`transform`](#sklearn.feature_selection.SelectFdr.transform "sklearn.feature_selection.SelectFdr.transform").
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L68)
Reduce X to the selected features.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_selected\_features]
The input samples with only the selected features.
scikit_learn sklearn.isotonic.isotonic_regression sklearn.isotonic.isotonic\_regression
=====================================
sklearn.isotonic.isotonic\_regression(*y*, *\**, *sample\_weight=None*, *y\_min=None*, *y\_max=None*, *increasing=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/isotonic.py#L80)
Solve the isotonic regression model.
Read more in the [User Guide](../isotonic#isotonic).
Parameters:
**y**array-like of shape (n\_samples,)
The data.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weights on each point of the regression. If None, weight is set to 1 (equal weights).
**y\_min**float, default=None
Lower bound on the lowest predicted value (the minimum value may still be higher). If not set, defaults to -inf.
**y\_max**float, default=None
Upper bound on the highest predicted value (the maximum may still be lower). If not set, defaults to +inf.
**increasing**bool, default=True
Whether the fitted `y_` should be increasing (if set to True) or decreasing (if set to False).
Returns:
**y\_**list of floats
Isotonic fit of y.
#### References
“Active set algorithms for isotonic regression; A unifying framework” by Michael J. Best and Nilotpal Chakravarti, section 3.
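#### Examples
A minimal sketch (added for illustration, not from the upstream docstring): the violating pair `3, 2` is pooled to its mean `2.5`, yielding a non-decreasing fit:
```
>>> from sklearn.isotonic import isotonic_regression
>>> isotonic_regression([1, 3, 2, 4]).tolist()  # pool-adjacent-violators result
[1.0, 2.5, 2.5, 4.0]
```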
scikit_learn sklearn.cluster.ward_tree sklearn.cluster.ward\_tree
==========================
sklearn.cluster.ward\_tree(*X*, *\**, *connectivity=None*, *n\_clusters=None*, *return\_distance=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_agglomerative.py#L169)
Ward clustering based on a Feature matrix.
Recursively merges the pair of clusters that minimally increases within-cluster variance.
The inertia matrix uses a Heapq-based representation.
This is the structured version, that takes into account some topological structure between samples.
Read more in the [User Guide](../clustering#hierarchical-clustering).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Feature matrix representing `n_samples` samples to be clustered.
**connectivity**sparse matrix, default=None
Connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. The matrix is assumed to be symmetric and only the upper triangular half is used. Default is None, i.e, the Ward algorithm is unstructured.
**n\_clusters**int, default=None
`n_clusters` should be less than `n_samples`. Stop the construction of the tree early at `n_clusters`. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. In this case, the complete tree is not computed, thus the ‘children’ output is of limited use, and the ‘parents’ output should rather be used. This option is valid only when specifying a connectivity matrix.
**return\_distance**bool, default=False
If `True`, return the distance between the clusters.
Returns:
**children**ndarray of shape (n\_nodes-1, 2)
The children of each non-leaf node. Values less than `n_samples` correspond to leaves of the tree which are the original samples. A node `i` greater than or equal to `n_samples` is a non-leaf node and has children `children_[i - n_samples]`. Alternatively at the i-th iteration, children[i][0] and children[i][1] are merged to form node `n_samples + i`.
**n\_connected\_components**int
The number of connected components in the graph.
**n\_leaves**int
The number of leaves in the tree.
**parents**ndarray of shape (n\_nodes,) or None
The parent of each node. Only returned when a connectivity matrix is specified, otherwise `None` is returned.
**distances**ndarray of shape (n\_nodes-1,)
Only returned if `return_distance` is set to `True` (for compatibility). The distances between the centers of the nodes. `distances[i]` corresponds to a weighted Euclidean distance between the nodes `children[i, 0]` and `children[i, 1]`. If the nodes refer to leaves of the tree, then `distances[i]` is their unweighted Euclidean distance. Distances are updated in the following way (from scipy.hierarchy.linkage):
The new entry \(d(u,v)\) is computed as follows,
\[d(u,v) = \sqrt{\frac{|v|+|s|} {T}d(v,s)^2 + \frac{|v|+|t|} {T}d(v,t)^2 - \frac{|v|} {T}d(s,t)^2}\] where \(u\) is the newly joined cluster consisting of clusters \(s\) and \(t\), \(v\) is an unused cluster in the forest, \(T=|v|+|s|+|t|\), and \(|\*|\) is the cardinality of its argument. This is also known as the incremental algorithm.
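#### Examples
An unstructured toy sketch (added for illustration, not from the upstream docstring), assuming four 1-D samples; with `connectivity=None` the `parents` output is `None`:
```
>>> import numpy as np
>>> from sklearn.cluster import ward_tree
>>> X = np.array([[0.], [1.], [4.], [5.]])
>>> children, n_connected_components, n_leaves, parents = ward_tree(X)
>>> children.shape  # (n_samples - 1, 2) merges
(3, 2)
>>> (n_connected_components, n_leaves, parents)
(1, 4, None)
```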
scikit_learn sklearn.metrics.v_measure_score sklearn.metrics.v\_measure\_score
=================================
sklearn.metrics.v\_measure\_score(*labels\_true*, *labels\_pred*, *\**, *beta=1.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_supervised.py#L625)
V-measure cluster labeling given a ground truth.
This score is identical to [`normalized_mutual_info_score`](sklearn.metrics.normalized_mutual_info_score#sklearn.metrics.normalized_mutual_info_score "sklearn.metrics.normalized_mutual_info_score") with the `'arithmetic'` option for averaging.
The V-measure is the harmonic mean between homogeneity and completeness:
```
v = (1 + beta) * homogeneity * completeness
/ (beta * homogeneity + completeness)
```
This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way.
This metric is furthermore symmetric: switching `label_true` with `label_pred` will return the same score value. This can be useful to measure the agreement of two independent label assignments strategies on the same dataset when the real ground truth is not known.
Read more in the [User Guide](../clustering#homogeneity-completeness).
Parameters:
**labels\_true**int array, shape = [n\_samples]
Ground truth class labels to be used as a reference.
**labels\_pred**array-like of shape (n\_samples,)
Cluster labels to evaluate.
**beta**float, default=1.0
Ratio of weight attributed to `homogeneity` vs `completeness`. If `beta` is greater than 1, `completeness` is weighted more strongly in the calculation. If `beta` is less than 1, `homogeneity` is weighted more strongly.
Returns:
**v\_measure**float
Score between 0.0 and 1.0; 1.0 stands for a perfectly complete labeling.
See also
[`homogeneity_score`](sklearn.metrics.homogeneity_score#sklearn.metrics.homogeneity_score "sklearn.metrics.homogeneity_score")
[`completeness_score`](sklearn.metrics.completeness_score#sklearn.metrics.completeness_score "sklearn.metrics.completeness_score")
[`normalized_mutual_info_score`](sklearn.metrics.normalized_mutual_info_score#sklearn.metrics.normalized_mutual_info_score "sklearn.metrics.normalized_mutual_info_score")
#### References
[1] [Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A conditional entropy-based external cluster evaluation measure](https://aclweb.org/anthology/D/D07/D07-1043.pdf)
#### Examples
Perfect labelings are both homogeneous and complete, hence have score 1.0:
```
>>> from sklearn.metrics.cluster import v_measure_score
>>> v_measure_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> v_measure_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
```
Labelings that assign all classes members to the same clusters are complete but not homogeneous, hence penalized:
```
>>> print("%.6f" % v_measure_score([0, 0, 1, 2], [0, 0, 1, 1]))
0.8...
>>> print("%.6f" % v_measure_score([0, 1, 2, 3], [0, 0, 1, 1]))
0.66...
```
Labelings that have pure clusters with members coming from the same classes are homogeneous but un-necessary splits harm completeness and thus penalize V-measure as well:
```
>>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 0, 1, 2]))
0.8...
>>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 1, 2, 3]))
0.66...
```
If classes members are completely split across different clusters, the assignment is totally incomplete, hence the V-Measure is null:
```
>>> print("%.6f" % v_measure_score([0, 0, 0, 0], [0, 1, 2, 3]))
0.0...
```
Clusters that include samples from totally different classes totally destroy the homogeneity of the labeling, hence:
```
>>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 0, 0, 0]))
0.0...
```
Examples using `sklearn.metrics.v_measure_score`
------------------------------------------------
[Biclustering documents with the Spectral Co-clustering algorithm](../../auto_examples/bicluster/plot_bicluster_newsgroups#sphx-glr-auto-examples-bicluster-plot-bicluster-newsgroups-py)
[A demo of K-Means clustering on the handwritten digits data](../../auto_examples/cluster/plot_kmeans_digits#sphx-glr-auto-examples-cluster-plot-kmeans-digits-py)
[Adjustment for chance in clustering performance evaluation](../../auto_examples/cluster/plot_adjusted_for_chance_measures#sphx-glr-auto-examples-cluster-plot-adjusted-for-chance-measures-py)
[Demo of DBSCAN clustering algorithm](../../auto_examples/cluster/plot_dbscan#sphx-glr-auto-examples-cluster-plot-dbscan-py)
[Demo of affinity propagation clustering algorithm](../../auto_examples/cluster/plot_affinity_propagation#sphx-glr-auto-examples-cluster-plot-affinity-propagation-py)
[Clustering text documents using k-means](../../auto_examples/text/plot_document_clustering#sphx-glr-auto-examples-text-plot-document-clustering-py)
scikit_learn sklearn.metrics.pairwise.linear_kernel sklearn.metrics.pairwise.linear\_kernel
=======================================
sklearn.metrics.pairwise.linear\_kernel(*X*, *Y=None*, *dense\_output=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L1162)
Compute the linear kernel between X and Y.
Read more in the [User Guide](../metrics#linear-kernel).
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
A feature array.
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
An optional second feature array. If `None`, uses `Y=X`.
**dense\_output**bool, default=True
Whether to return dense output even when the input is sparse. If `False`, the output is sparse if both input arrays are sparse.
New in version 0.20.
Returns:
**Gram matrix**ndarray of shape (n\_samples\_X, n\_samples\_Y)
The Gram matrix of the linear kernel, i.e. `X @ Y.T`.
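This entry ships without an example; the following minimal sketch (arrays chosen only for illustration) shows that the result is simply the matrix of pairwise dot products, `X @ Y.T`:
```
>>> import numpy as np
>>> from sklearn.metrics.pairwise import linear_kernel
>>> X = np.array([[0., 1.], [1., 1.]])
>>> Y = np.array([[1., 0.], [2., 2.]])
>>> linear_kernel(X, Y)
array([[0., 2.],
       [1., 4.]])
>>> np.allclose(linear_kernel(X, Y), X @ Y.T)
True
```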
scikit_learn sklearn.utils.sparsefuncs.inplace_swap_row sklearn.utils.sparsefuncs.inplace\_swap\_row
============================================
sklearn.utils.sparsefuncs.inplace\_swap\_row(*X*, *m*, *n*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/sparsefuncs.py#L362)
Swaps two rows of a CSC/CSR matrix in-place.
Parameters:
**X**sparse matrix of shape (n\_samples, n\_features)
Matrix whose two rows are to be swapped. It should be of CSR or CSC format.
**m**int
Index of the row of X to be swapped.
**n**int
Index of the row of X to be swapped.
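A minimal usage sketch (the matrix is illustrative); note that the function modifies `X` in place and returns nothing:
```
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.sparsefuncs import inplace_swap_row
>>> X = csr_matrix(np.array([[1., 0.], [0., 2.], [3., 0.]]))
>>> inplace_swap_row(X, 0, 2)  # swap rows 0 and 2 without copying the data
>>> X.toarray()
array([[3., 0.],
       [0., 2.],
       [1., 0.]])
```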
scikit_learn sklearn.utils.sparsefuncs.inplace_swap_column sklearn.utils.sparsefuncs.inplace\_swap\_column
===============================================
sklearn.utils.sparsefuncs.inplace\_swap\_column(*X*, *m*, *n*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/sparsefuncs.py#L386)
Swap two columns of a CSC/CSR matrix in-place.
Parameters:
**X**sparse matrix of shape (n\_samples, n\_features)
Matrix whose two columns are to be swapped. It should be of CSR or CSC format.
**m**int
Index of the column of X to be swapped.
**n**int
Index of the column of X to be swapped.
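Analogously to `inplace_swap_row`, a minimal sketch with an illustrative CSC matrix, modified in place:
```
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from sklearn.utils.sparsefuncs import inplace_swap_column
>>> X = csc_matrix(np.array([[1., 0., 2.], [0., 3., 0.]]))
>>> inplace_swap_column(X, 0, 2)  # swap columns 0 and 2 in place
>>> X.toarray()
array([[2., 0., 1.],
       [0., 3., 0.]])
```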
scikit_learn sklearn.metrics.pairwise.sigmoid_kernel sklearn.metrics.pairwise.sigmoid\_kernel
========================================
sklearn.metrics.pairwise.sigmoid\_kernel(*X*, *Y=None*, *gamma=None*, *coef0=1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L1232)
Compute the sigmoid kernel between X and Y.
```
K(X, Y) = tanh(gamma <X, Y> + coef0)
```
Read more in the [User Guide](../metrics#sigmoid-kernel).
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
A feature array.
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
An optional second feature array. If `None`, uses `Y=X`.
**gamma**float, default=None
Coefficient of the vector inner product. If None, defaults to 1.0 / n\_features.
**coef0**float, default=1
Constant offset added to scaled inner product.
Returns:
**Gram matrix**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Sigmoid kernel between two arrays.
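A minimal sketch (illustrative data) checking the returned matrix against the stated formula `tanh(gamma <X, Y> + coef0)`:
```
>>> import numpy as np
>>> from sklearn.metrics.pairwise import sigmoid_kernel
>>> X = np.array([[0., 1.], [1., 1.]])
>>> K = sigmoid_kernel(X, gamma=0.5, coef0=1)  # Y defaults to X
>>> K.shape
(2, 2)
>>> np.allclose(K, np.tanh(0.5 * X @ X.T + 1))
True
```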
scikit_learn sklearn.feature_extraction.image.PatchExtractor sklearn.feature\_extraction.image.PatchExtractor
================================================
*class*sklearn.feature\_extraction.image.PatchExtractor(*\**, *patch\_size=None*, *max\_patches=None*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/image.py#L463)
Extracts patches from a collection of images.
Read more in the [User Guide](../feature_extraction#image-feature-extraction).
New in version 0.9.
Parameters:
**patch\_size**tuple of int (patch\_height, patch\_width), default=None
The dimensions of one patch.
**max\_patches**int or float, default=None
The maximum number of patches per image to extract. If `max_patches` is a float in (0, 1), it is taken to mean a proportion of the total number of patches.
**random\_state**int, RandomState instance, default=None
Determines the random number generator used for random sampling when `max_patches is not None`. Use an int to make the randomness deterministic. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
See also
[`reconstruct_from_patches_2d`](sklearn.feature_extraction.image.reconstruct_from_patches_2d#sklearn.feature_extraction.image.reconstruct_from_patches_2d "sklearn.feature_extraction.image.reconstruct_from_patches_2d")
Reconstruct image from all of its patches.
#### Examples
```
>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction import image
>>> # Use the array data from the second image in this dataset:
>>> X = load_sample_images().images[1]
>>> print('Image shape: {}'.format(X.shape))
Image shape: (427, 640, 3)
>>> pe = image.PatchExtractor(patch_size=(2, 2))
>>> pe_fit = pe.fit(X)
>>> pe_trans = pe.transform(X)
>>> print('Patches shape: {}'.format(pe_trans.shape))
Patches shape: (545706, 2, 2)
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.feature_extraction.image.PatchExtractor.fit "sklearn.feature_extraction.image.PatchExtractor.fit")(X[, y]) | Do nothing and return the estimator unchanged. |
| [`get_params`](#sklearn.feature_extraction.image.PatchExtractor.get_params "sklearn.feature_extraction.image.PatchExtractor.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.feature_extraction.image.PatchExtractor.set_params "sklearn.feature_extraction.image.PatchExtractor.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_extraction.image.PatchExtractor.transform "sklearn.feature_extraction.image.PatchExtractor.transform")(X) | Transform the image samples in `X` into a matrix of patch data. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/image.py#L510)
Do nothing and return the estimator unchanged.
This method is just there to implement the usual API and hence work in pipelines.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/image.py#L531)
Transform the image samples in `X` into a matrix of patch data.
Parameters:
**X**ndarray of shape (n\_samples, image\_height, image\_width) or (n\_samples, image\_height, image\_width, n\_channels)
Array of images from which to extract patches. For color images, the last dimension specifies the channel: a RGB image would have `n_channels=3`.
Returns:
**patches**array of shape (n\_patches, patch\_height, patch\_width) or (n\_patches, patch\_height, patch\_width, n\_channels)
The collection of patches extracted from the images, where `n_patches` is either `n_samples * max_patches` or the total number of patches that can be extracted.
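The class example above passes a single image; `transform` actually expects a batch with an explicit sample axis. A hedged sketch of that layout (shapes depend on the bundled sample images, so treat the numbers as indicative):
```
>>> import numpy as np
>>> from sklearn.datasets import load_sample_images
>>> from sklearn.feature_extraction.image import PatchExtractor
>>> # Stack the two sample images into shape (n_samples, height, width, n_channels)
>>> images = np.asarray(load_sample_images().images)
>>> images.shape
(2, 427, 640, 3)
>>> pe = PatchExtractor(patch_size=(16, 16), max_patches=100, random_state=0)
>>> pe.fit(images).transform(images).shape  # 100 random patches per image
(200, 16, 16, 3)
```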
scikit_learn sklearn.cluster.BisectingKMeans sklearn.cluster.BisectingKMeans
===============================
*class*sklearn.cluster.BisectingKMeans(*n\_clusters=8*, *\**, *init='random'*, *n\_init=1*, *random\_state=None*, *max\_iter=300*, *verbose=0*, *tol=0.0001*, *copy\_x=True*, *algorithm='lloyd'*, *bisecting\_strategy='biggest\_inertia'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_bisect_k_means.py#L76)
Bisecting K-Means clustering.
Read more in the [User Guide](../clustering#bisect-k-means).
New in version 1.1.
Parameters:
**n\_clusters**int, default=8
The number of clusters to form as well as the number of centroids to generate.
**init**{‘k-means++’, ‘random’} or callable, default=’random’
Method for initialization:
‘k-means++’ : selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See section Notes in k\_init for more details.
‘random’: choose `n_clusters` observations (rows) at random from data for the initial centroids.
If a callable is passed, it should take arguments X, n\_clusters and a random state and return an initialization.
**n\_init**int, default=1
Number of times the inner k-means algorithm will be run with different centroid seeds in each bisection. The best of the `n_init` consecutive runs, in terms of inertia, is kept for each bisection.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for centroid initialization in inner K-Means. Use an int to make the randomness deterministic. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**max\_iter**int, default=300
Maximum number of iterations of the inner k-means algorithm at each bisection.
**verbose**int, default=0
Verbosity mode.
**tol**float, default=1e-4
Relative tolerance with regards to Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence. Used in inner k-means algorithm at each bisection to pick best possible clusters.
**copy\_x**bool, default=True
When pre-computing distances it is more numerically accurate to center the data first. If copy\_x is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy\_x is False. If the original data is sparse, but not in CSR format, a copy will be made even if copy\_x is False.
**algorithm**{“lloyd”, “elkan”}, default=”lloyd”
Inner K-means algorithm used in bisection. The classical EM-style algorithm is `"lloyd"`. The `"elkan"` variation can be more efficient on some datasets with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape `(n_samples, n_clusters)`.
**bisecting\_strategy**{“biggest\_inertia”, “largest\_cluster”}, default=”biggest\_inertia”
Defines how bisection should be performed (a short sketch contrasting the two strategies follows this list):
* “biggest\_inertia”: BisectingKMeans always checks all calculated clusters for the cluster with the biggest SSE (sum of squared errors) and bisects it. This approach concentrates on precision, but may be costly in terms of execution time (especially for a larger number of data points).
* “largest\_cluster”: BisectingKMeans always splits the cluster with the largest number of points assigned to it among all clusters previously calculated. This should work faster than picking by SSE (“biggest\_inertia”) and may produce similar results in most cases.
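As referenced above, a minimal sketch contrasting the two strategies on illustrative toy blobs of very different sizes (the resulting partitions and inertia values will vary with the data):
```
import numpy as np
from sklearn.cluster import BisectingKMeans

rng = np.random.RandomState(0)
# Three blobs: two large, one tiny (sizes chosen only for illustration)
X = np.vstack([
    rng.normal(0, 0.1, size=(50, 2)),
    rng.normal(5, 0.1, size=(50, 2)),
    rng.normal(10, 0.1, size=(5, 2)),
])

for strategy in ("biggest_inertia", "largest_cluster"):
    model = BisectingKMeans(n_clusters=3, bisecting_strategy=strategy,
                            random_state=0).fit(X)
    print(strategy, "inertia:", round(float(model.inertia_), 3))
```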
Attributes:
**cluster\_centers\_**ndarray of shape (n\_clusters, n\_features)
Coordinates of cluster centers. If the algorithm stops before fully converging (see `tol` and `max_iter`), these will not be consistent with `labels_`.
**labels\_**ndarray of shape (n\_samples,)
Labels of each point.
**inertia\_**float
Sum of squared distances of samples to their closest cluster center, weighted by the sample weights if provided.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
See also
[`KMeans`](sklearn.cluster.kmeans#sklearn.cluster.KMeans "sklearn.cluster.KMeans")
Original implementation of K-Means algorithm.
#### Notes
It might be inefficient when `n_clusters` is less than 3, due to unnecessary calculations for that case.
#### Examples
```
>>> from sklearn.cluster import BisectingKMeans
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
... [10, 2], [10, 4], [10, 0],
... [10, 6], [10, 8], [10, 10]])
>>> bisect_means = BisectingKMeans(n_clusters=3, random_state=0).fit(X)
>>> bisect_means.labels_
array([2, 2, 2, 0, 0, 0, 1, 1, 1], dtype=int32)
>>> bisect_means.predict([[0, 0], [12, 3]])
array([2, 0], dtype=int32)
>>> bisect_means.cluster_centers_
array([[10., 2.],
[10., 8.],
[ 1., 2.]])
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cluster.BisectingKMeans.fit "sklearn.cluster.BisectingKMeans.fit")(X[, y, sample\_weight]) | Compute bisecting k-means clustering. |
| [`fit_predict`](#sklearn.cluster.BisectingKMeans.fit_predict "sklearn.cluster.BisectingKMeans.fit_predict")(X[, y, sample\_weight]) | Compute cluster centers and predict cluster index for each sample. |
| [`fit_transform`](#sklearn.cluster.BisectingKMeans.fit_transform "sklearn.cluster.BisectingKMeans.fit_transform")(X[, y, sample\_weight]) | Compute clustering and transform X to cluster-distance space. |
| [`get_feature_names_out`](#sklearn.cluster.BisectingKMeans.get_feature_names_out "sklearn.cluster.BisectingKMeans.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.cluster.BisectingKMeans.get_params "sklearn.cluster.BisectingKMeans.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.cluster.BisectingKMeans.predict "sklearn.cluster.BisectingKMeans.predict")(X) | Predict which cluster each sample in X belongs to. |
| [`score`](#sklearn.cluster.BisectingKMeans.score "sklearn.cluster.BisectingKMeans.score")(X[, y, sample\_weight]) | Opposite of the value of X on the K-means objective. |
| [`set_params`](#sklearn.cluster.BisectingKMeans.set_params "sklearn.cluster.BisectingKMeans.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.cluster.BisectingKMeans.transform "sklearn.cluster.BisectingKMeans.transform")(X) | Transform X to a cluster-distance space. |
fit(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_bisect_k_means.py#L358)
Compute bisecting k-means clustering.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training instances to cluster.
Note
The data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
self
Fitted estimator.
fit\_predict(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L973)
Compute cluster centers and predict cluster index for each sample.
Convenience method; equivalent to calling fit(X) followed by predict(X).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to transform.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**labels**ndarray of shape (n\_samples,)
Index of the cluster each sample belongs to.
fit\_transform(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1035)
Compute clustering and transform X to cluster-distance space.
Equivalent to fit(X).transform(X), but more efficiently implemented.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to transform.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_clusters)
X transformed in the new space.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.cluster.BisectingKMeans.fit "sklearn.cluster.BisectingKMeans.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_bisect_k_means.py#L448)
Predict which cluster each sample in X belongs to.
Prediction is made by going down the hierarchical tree in searching of closest leaf cluster.
In the vector quantization literature, `cluster_centers_` is called the code book and each value returned by `predict` is the index of the closest code in the code book.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to predict.
Returns:
**labels**ndarray of shape (n\_samples,)
Index of the cluster each sample belongs to.
score(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1085)
Opposite of the value of X on the K-means objective.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data.
**y**Ignored
Not used, present here for API consistency by convention.
**sample\_weight**array-like of shape (n\_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
Returns:
**score**float
Opposite of the value of X on the K-means objective.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_kmeans.py#L1059)
Transform X to a cluster-distance space.
In the new space, each dimension is the distance to the cluster centers. Note that even if X is sparse, the array returned by `transform` will typically be dense.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
New data to transform.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_clusters)
X transformed in the new space.
Examples using `sklearn.cluster.BisectingKMeans`
------------------------------------------------
[Release Highlights for scikit-learn 1.1](../../auto_examples/release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Bisecting K-Means and Regular K-Means Performance Comparison](../../auto_examples/cluster/plot_bisect_kmeans#sphx-glr-auto-examples-cluster-plot-bisect-kmeans-py)
scikit_learn sklearn.datasets.clear_data_home sklearn.datasets.clear\_data\_home
==================================
sklearn.datasets.clear\_data\_home(*data\_home=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_base.py#L70)
Delete all the content of the data home cache.
Parameters:
**data\_home**str, default=None
The path to scikit-learn data directory. If `None`, the default path is `~/scikit_learn_data`.
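A minimal sketch of the typical call pattern (the printed path depends on your environment):
```
from sklearn.datasets import get_data_home, clear_data_home

print(get_data_home())  # e.g. '/home/<user>/scikit_learn_data' by default
clear_data_home()       # remove every cached dataset under that directory
```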
scikit_learn sklearn.gaussian_process.kernels.WhiteKernel sklearn.gaussian\_process.kernels.WhiteKernel
=============================================
*class*sklearn.gaussian\_process.kernels.WhiteKernel(*noise\_level=1.0*, *noise\_level\_bounds=(1e-05, 100000.0)*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1300)
White kernel.
The main use-case of this kernel is as part of a sum-kernel where it explains the noise of the signal as independently and identically normally-distributed. The parameter noise\_level equals the variance of this noise.
\[k(x\_1, x\_2) = noise\\_level \text{ if } x\_1 == x\_2 \text{ else } 0\]
Read more in the [User Guide](../gaussian_process#gp-kernels).
New in version 0.18.
Parameters:
**noise\_level**float, default=1.0
Parameter controlling the noise level (variance)
**noise\_level\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘noise\_level’. If set to “fixed”, ‘noise\_level’ cannot be changed during hyperparameter tuning.
Attributes:
[`bounds`](#sklearn.gaussian_process.kernels.WhiteKernel.bounds "sklearn.gaussian_process.kernels.WhiteKernel.bounds")
Returns the log-transformed bounds on the theta.
**hyperparameter\_noise\_level**
[`hyperparameters`](#sklearn.gaussian_process.kernels.WhiteKernel.hyperparameters "sklearn.gaussian_process.kernels.WhiteKernel.hyperparameters")
Returns a list of all hyperparameter specifications.
[`n_dims`](#sklearn.gaussian_process.kernels.WhiteKernel.n_dims "sklearn.gaussian_process.kernels.WhiteKernel.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.WhiteKernel.requires_vector_input "sklearn.gaussian_process.kernels.WhiteKernel.requires_vector_input")
Whether the kernel works only on fixed-length feature vectors.
[`theta`](#sklearn.gaussian_process.kernels.WhiteKernel.theta "sklearn.gaussian_process.kernels.WhiteKernel.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### Examples
```
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = DotProduct() + WhiteKernel(noise_level=0.5)
>>> gpr = GaussianProcessRegressor(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3680...
>>> gpr.predict(X[:2,:], return_std=True)
(array([653.0..., 592.1... ]), array([316.6..., 316.6...]))
```
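To make the formula above concrete, a small sketch of the raw kernel matrix: on a single input set it is `noise_level` times the identity, and between two distinct input sets it is all zeros (arrays shown are indicative):
```
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import WhiteKernel
>>> kernel = WhiteKernel(noise_level=0.5)
>>> X = np.array([[0.], [1.], [2.]])
>>> kernel(X)
array([[0.5, 0. , 0. ],
       [0. , 0.5, 0. ],
       [0. , 0. , 0.5]])
>>> kernel(X, np.array([[0.], [1.]]))
array([[0., 0.],
       [0., 0.],
       [0., 0.]])
```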
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.WhiteKernel.__call__ "sklearn.gaussian_process.kernels.WhiteKernel.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.WhiteKernel.clone_with_theta "sklearn.gaussian_process.kernels.WhiteKernel.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.WhiteKernel.diag "sklearn.gaussian_process.kernels.WhiteKernel.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.WhiteKernel.get_params "sklearn.gaussian_process.kernels.WhiteKernel.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.WhiteKernel.is_stationary "sklearn.gaussian_process.kernels.WhiteKernel.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.WhiteKernel.set_params "sklearn.gaussian_process.kernels.WhiteKernel.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1349)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features) or list of object
Left argument of the returned kernel k(X, Y)
**Y**array-like of shape (n\_samples\_X, n\_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval\_gradient is True.
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1396)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**array-like of shape (n\_samples\_X, n\_features) or list of object
Argument to the kernel.
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X)
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L158)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameters
Returns a list of all hyperparameter specifications.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L474)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Whether the kernel works only on fixed-length feature vectors.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using `sklearn.gaussian_process.kernels.WhiteKernel`
-------------------------------------------------------------
[Comparison of kernel ridge and Gaussian process regression](../../auto_examples/gaussian_process/plot_compare_gpr_krr#sphx-glr-auto-examples-gaussian-process-plot-compare-gpr-krr-py)
[Gaussian process regression (GPR) on Mauna Loa CO2 data](../../auto_examples/gaussian_process/plot_gpr_co2#sphx-glr-auto-examples-gaussian-process-plot-gpr-co2-py)
[Gaussian process regression (GPR) with noise-level estimation](../../auto_examples/gaussian_process/plot_gpr_noisy#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-py)
scikit_learn sklearn.linear_model.MultiTaskElasticNet sklearn.linear\_model.MultiTaskElasticNet
=========================================
*class*sklearn.linear\_model.MultiTaskElasticNet(*alpha=1.0*, *\**, *l1\_ratio=0.5*, *fit\_intercept=True*, *normalize='deprecated'*, *copy\_X=True*, *max\_iter=1000*, *tol=0.0001*, *warm\_start=False*, *random\_state=None*, *selection='cyclic'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L2268)
Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.
The optimization objective for MultiTaskElasticNet is:
```
(1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
```
Where:
```
||W||_21 = sum_i sqrt(sum_j W_ij ^ 2)
```
i.e. the sum of norms of each row.
Read more in the [User Guide](../linear_model#multi-task-elastic-net).
Parameters:
**alpha**float, default=1.0
Constant that multiplies the L1/L2 term. Defaults to 1.0.
**l1\_ratio**float, default=0.5
The ElasticNet mixing parameter, with 0 < l1\_ratio <= 1. For l1\_ratio = 1 the penalty is an L1/L2 penalty. For l1\_ratio = 0 it is an L2 penalty. For `0 < l1_ratio < 1`, the penalty is a combination of L1/L2 and L2.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**max\_iter**int, default=1000
The maximum number of iterations.
**tol**float, default=1e-4
The tolerance for the optimization: if the updates are smaller than `tol`, the optimization code checks the dual gap for optimality and continues until it is smaller than `tol`.
**warm\_start**bool, default=False
When set to `True`, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**random\_state**int, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when `selection` == ‘random’. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**selection**{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
Attributes:
**intercept\_**ndarray of shape (n\_targets,)
Independent term in decision function.
**coef\_**ndarray of shape (n\_targets, n\_features)
Parameter vector (W in the cost function formula). If a 1D y is passed in at fit (non multi-task usage), `coef_` is then a 1D array. Note that `coef_` stores the transpose of `W`, `W.T`.
**n\_iter\_**int
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
**dual\_gap\_**float
The dual gaps at the end of the optimization.
**eps\_**float
The tolerance scaled by the variance of the target `y`.
[`sparse_coef_`](#sklearn.linear_model.MultiTaskElasticNet.sparse_coef_ "sklearn.linear_model.MultiTaskElasticNet.sparse_coef_")sparse matrix of shape (n\_features,) or (n\_targets, n\_features)
Sparse representation of the fitted `coef_`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`MultiTaskElasticNetCV`](sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV "sklearn.linear_model.MultiTaskElasticNetCV")
Multi-task L1/L2 ElasticNet with built-in cross-validation.
[`ElasticNet`](sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet "sklearn.linear_model.ElasticNet")
Linear regression with combined L1 and L2 priors as regularizer.
[`MultiTaskLasso`](sklearn.linear_model.multitasklasso#sklearn.linear_model.MultiTaskLasso "sklearn.linear_model.MultiTaskLasso")
Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer.
#### Notes
The algorithm used to fit the model is coordinate descent.
To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays.
#### Examples
```
>>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskElasticNet(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])
MultiTaskElasticNet(alpha=0.1)
>>> print(clf.coef_)
[[0.45663524 0.45612256]
[0.45663524 0.45612256]]
>>> print(clf.intercept_)
[0.0872422 0.0872422]
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.MultiTaskElasticNet.fit "sklearn.linear_model.MultiTaskElasticNet.fit")(X, y) | Fit MultiTaskElasticNet model with coordinate descent. |
| [`get_params`](#sklearn.linear_model.MultiTaskElasticNet.get_params "sklearn.linear_model.MultiTaskElasticNet.get_params")([deep]) | Get parameters for this estimator. |
| [`path`](#sklearn.linear_model.MultiTaskElasticNet.path "sklearn.linear_model.MultiTaskElasticNet.path")(X, y, \*[, l1\_ratio, eps, n\_alphas, ...]) | Compute elastic net path with coordinate descent. |
| [`predict`](#sklearn.linear_model.MultiTaskElasticNet.predict "sklearn.linear_model.MultiTaskElasticNet.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.MultiTaskElasticNet.score "sklearn.linear_model.MultiTaskElasticNet.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.MultiTaskElasticNet.set_params "sklearn.linear_model.MultiTaskElasticNet.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L2429)
Fit MultiTaskElasticNet model with coordinate descent.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Data.
**y**ndarray of shape (n\_samples, n\_targets)
Target. Will be cast to X’s dtype if necessary.
Returns:
**self**object
Fitted estimator.
#### Notes
Coordinate descent is an algorithm that considers one column of the data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary.
To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*static*path(*X*, *y*, *\**, *l1\_ratio=0.5*, *eps=0.001*, *n\_alphas=100*, *alphas=None*, *precompute='auto'*, *Xy=None*, *copy\_X=True*, *coef\_init=None*, *verbose=False*, *return\_n\_iter=False*, *positive=False*, *check\_input=True*, *\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L366)
Compute elastic net path with coordinate descent.
The elastic net optimization function varies for mono and multi-outputs.
For mono-output tasks it is:
```
1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
```
For multi-output tasks it is:
```
(1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
```
Where:
```
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
```
i.e. the sum of norms of each row.
Read more in the [User Guide](../linear_model#elastic-net).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If `y` is mono-output then `X` can be sparse.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**l1\_ratio**float, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). `l1_ratio=1` corresponds to the Lasso.
**eps**float, default=1e-3
Length of the path. `eps=1e-3` means that `alpha_min / alpha_max = 1e-3`.
**n\_alphas**int, default=100
Number of alphas along the regularization path.
**alphas**ndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
**precompute**‘auto’, bool or array-like of shape (n\_features, n\_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**Xy**array-like of shape (n\_features,) or (n\_features, n\_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**coef\_init**ndarray of shape (n\_features, ), default=None
The initial values of the coefficients.
**verbose**bool or int, default=False
Amount of verbosity.
**return\_n\_iter**bool, default=False
Whether to return the number of iterations or not.
**positive**bool, default=False
If set to True, forces coefficients to be positive. (Only allowed when `y.ndim == 1`).
**check\_input**bool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**\*\*params**kwargs
Keyword arguments passed to the coordinate descent solver.
Returns:
**alphas**ndarray of shape (n\_alphas,)
The alphas along the path where models are computed.
**coefs**ndarray of shape (n\_features, n\_alphas) or (n\_targets, n\_features, n\_alphas)
Coefficients along the path.
**dual\_gaps**ndarray of shape (n\_alphas,)
The dual gaps at the end of the optimization for each alpha.
**n\_iters**list of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when `return_n_iter` is set to True).
See also
[`MultiTaskElasticNet`](#sklearn.linear_model.MultiTaskElasticNet "sklearn.linear_model.MultiTaskElasticNet")
Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.
[`MultiTaskElasticNetCV`](sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV "sklearn.linear_model.MultiTaskElasticNetCV")
Multi-task L1/L2 ElasticNet with built-in cross-validation.
[`ElasticNet`](sklearn.linear_model.elasticnet#sklearn.linear_model.ElasticNet "sklearn.linear_model.ElasticNet")
Linear regression with combined L1 and L2 priors as regularizer.
[`ElasticNetCV`](sklearn.linear_model.elasticnetcv#sklearn.linear_model.ElasticNetCV "sklearn.linear_model.ElasticNetCV")
Elastic Net model with iterative fitting along a regularization path.
#### Notes
For an example, see [examples/linear\_model/plot\_lasso\_coordinate\_descent\_path.py](../../auto_examples/linear_model/plot_lasso_coordinate_descent_path#sphx-glr-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py).
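Beyond the gallery example linked above, a minimal multi-output sketch (random data, purely illustrative) showing the shapes returned along the path:
```
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskElasticNet
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(20, 5)
>>> Y = rng.randn(20, 2)
>>> alphas, coefs, gaps = MultiTaskElasticNet.path(X, Y, l1_ratio=0.5, n_alphas=4)
>>> alphas.shape, coefs.shape, gaps.shape
((4,), (2, 5, 4), (4,))
```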
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
*property*sparse\_coef\_
Sparse representation of the fitted `coef_`.
scikit_learn sklearn.gaussian_process.kernels.PairwiseKernel sklearn.gaussian\_process.kernels.PairwiseKernel
================================================
*class*sklearn.gaussian\_process.kernels.PairwiseKernel(*gamma=1.0*, *gamma\_bounds=(1e-05, 100000.0)*, *metric='linear'*, *pairwise\_kernels\_kwargs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2225)
Wrapper for kernels in sklearn.metrics.pairwise.
A thin wrapper around the functionality of the kernels in sklearn.metrics.pairwise.
Note: Evaluation of eval\_gradient is not analytic but numeric and all kernels support only isotropic distances. The parameter gamma is considered to be a hyperparameter and may be optimized. The other kernel parameters are set directly at initialization and are kept fixed.
New in version 0.18.
Parameters:
**gamma**float, default=1.0
Parameter gamma of the pairwise kernel specified by metric. It should be positive.
**gamma\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘gamma’. If set to “fixed”, ‘gamma’ cannot be changed during hyperparameter tuning.
**metric**{“linear”, “additive\_chi2”, “chi2”, “poly”, “polynomial”, “rbf”, “laplacian”, “sigmoid”, “cosine”} or callable, default=”linear”
The metric to use when calculating kernel between instances in a feature array. If metric is a string, it must be one of the metrics in pairwise.PAIRWISE\_KERNEL\_FUNCTIONS. If metric is “precomputed”, X is assumed to be a kernel matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them.
**pairwise\_kernels\_kwargs**dict, default=None
All entries of this dict (if any) are passed as keyword arguments to the pairwise kernel function.
Attributes:
[`bounds`](#sklearn.gaussian_process.kernels.PairwiseKernel.bounds "sklearn.gaussian_process.kernels.PairwiseKernel.bounds")
Returns the log-transformed bounds on the theta.
**hyperparameter\_gamma**
[`hyperparameters`](#sklearn.gaussian_process.kernels.PairwiseKernel.hyperparameters "sklearn.gaussian_process.kernels.PairwiseKernel.hyperparameters")
Returns a list of all hyperparameter specifications.
[`n_dims`](#sklearn.gaussian_process.kernels.PairwiseKernel.n_dims "sklearn.gaussian_process.kernels.PairwiseKernel.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.PairwiseKernel.requires_vector_input "sklearn.gaussian_process.kernels.PairwiseKernel.requires_vector_input")
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
[`theta`](#sklearn.gaussian_process.kernels.PairwiseKernel.theta "sklearn.gaussian_process.kernels.PairwiseKernel.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import PairwiseKernel
>>> X, y = load_iris(return_X_y=True)
>>> kernel = PairwiseKernel(metric='rbf')
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9733...
>>> gpc.predict_proba(X[:2,:])
array([[0.8880..., 0.05663..., 0.05532...],
[0.8676..., 0.07073..., 0.06165...]])
```
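As a complementary sketch (illustrative data), the wrapper reproduces the corresponding function from `sklearn.metrics.pairwise`, with `gamma` forwarded to it:
```
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import PairwiseKernel
>>> from sklearn.metrics.pairwise import laplacian_kernel
>>> X = np.array([[0., 1.], [1., 2.], [2., 3.]])
>>> kernel = PairwiseKernel(gamma=0.5, metric="laplacian")
>>> np.allclose(kernel(X), laplacian_kernel(X, gamma=0.5))
True
```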
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.PairwiseKernel.__call__ "sklearn.gaussian_process.kernels.PairwiseKernel.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.PairwiseKernel.clone_with_theta "sklearn.gaussian_process.kernels.PairwiseKernel.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.PairwiseKernel.diag "sklearn.gaussian_process.kernels.PairwiseKernel.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.PairwiseKernel.get_params "sklearn.gaussian_process.kernels.PairwiseKernel.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.PairwiseKernel.is_stationary "sklearn.gaussian_process.kernels.PairwiseKernel.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.PairwiseKernel.set_params "sklearn.gaussian_process.kernels.PairwiseKernel.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2298)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when `eval_gradient` is True.
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2358)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X)
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L158)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameters
Returns a list of all hyperparameter specifications.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2378)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
scikit_learn sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis sklearn.discriminant\_analysis.QuadraticDiscriminantAnalysis
============================================================
*class*sklearn.discriminant\_analysis.QuadraticDiscriminantAnalysis(*\**, *priors=None*, *reg\_param=0.0*, *store\_covariance=False*, *tol=0.0001*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L718)
Quadratic Discriminant Analysis.
A classifier with a quadratic decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.
The model fits a Gaussian density to each class.
New in version 0.17: *QuadraticDiscriminantAnalysis*
Read more in the [User Guide](../lda_qda#lda-qda).
Parameters:
**priors**ndarray of shape (n\_classes,), default=None
Class priors. By default, the class proportions are inferred from the training data.
**reg\_param**float, default=0.0
Regularizes the per-class covariance estimates by transforming S2 as `S2 = (1 - reg_param) * S2 + reg_param * np.eye(n_features)`, where S2 corresponds to the `scaling_` attribute of a given class.
**store\_covariance**bool, default=False
If True, the class covariance matrices are explicitly computed and stored in the `self.covariance_` attribute.
New in version 0.17.
**tol**float, default=1.0e-4
Absolute threshold for a singular value to be considered significant, used to estimate the rank of `Xk` where `Xk` is the centered matrix of samples in class k. This parameter does not affect the predictions. It only controls a warning that is raised when features are considered to be colinear.
New in version 0.17.
Attributes:
**covariance\_**list of len n\_classes of ndarray of shape (n\_features, n\_features)
For each class, gives the covariance matrix estimated using the samples of that class. The estimations are unbiased. Only present if `store_covariance` is True.
**means\_**array-like of shape (n\_classes, n\_features)
Class-wise means.
**priors\_**array-like of shape (n\_classes,)
Class priors (sum to 1).
**rotations\_**list of len n\_classes of ndarray of shape (n\_features, n\_k)
For each class k an array of shape (n\_features, n\_k), where `n_k = min(n_features, number of elements in class k)` It is the rotation of the Gaussian distribution, i.e. its principal axis. It corresponds to `V`, the matrix of eigenvectors coming from the SVD of `Xk = U S Vt` where `Xk` is the centered matrix of samples from class k.
**scalings\_**list of len n\_classes of ndarray of shape (n\_k,)
For each class, contains the scaling of the Gaussian distributions along its principal axes, i.e. the variance in the rotated coordinate system. It corresponds to `S^2 / (n_samples - 1)`, where `S` is the diagonal matrix of singular values from the SVD of `Xk`, where `Xk` is the centered matrix of samples from class k.
**classes\_**ndarray of shape (n\_classes,)
Unique class labels.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`LinearDiscriminantAnalysis`](sklearn.discriminant_analysis.lineardiscriminantanalysis#sklearn.discriminant_analysis.LinearDiscriminantAnalysis "sklearn.discriminant_analysis.LinearDiscriminantAnalysis")
Linear Discriminant Analysis.
#### Examples
```
>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = QuadraticDiscriminantAnalysis()
>>> clf.fit(X, y)
QuadraticDiscriminantAnalysis()
>>> print(clf.predict([[-0.8, -1]]))
[1]
```
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.decision_function "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.decision_function")(X) | Apply decision function to an array of samples. |
| [`fit`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.fit "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.fit")(X, y) | Fit the model according to the given training data and parameters. |
| [`get_params`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.get_params "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.predict "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.predict")(X) | Perform classification on an array of test vectors X. |
| [`predict_log_proba`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.predict_log_proba "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.predict_log_proba")(X) | Return log of posterior probabilities of classification. |
| [`predict_proba`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.predict_proba "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.predict_proba")(X) | Return posterior probabilities of classification. |
| [`score`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.score "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.set_params "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis.set_params")(\*\*params) | Set the parameters of this estimator. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L918)
Apply decision function to an array of samples.
The decision function is equal (up to a constant factor) to the log-posterior of the model, i.e. `log p(y = k | x)`. In a binary classification setting this instead corresponds to the difference `log p(y = 1 | x) - log p(y = 0 | x)`. See [Mathematical formulation of the LDA and QDA classifiers](../lda_qda#lda-qda-math).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Array of samples (test vectors).
Returns:
**C**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Decision function values related to each class, per sample. In the two-class case, the shape is (n\_samples,), giving the log likelihood ratio of the positive class.
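As a hedged illustration of the binary case described above, the decision function can be compared with the difference of the per-class log-posteriors returned by `predict_log_proba` (toy data reused from the example above):
```
>>> import numpy as np
>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = QuadraticDiscriminantAnalysis().fit(X, y)
>>> scores = clf.decision_function(X)      # shape (n_samples,) with two classes
>>> log_post = clf.predict_log_proba(X)    # shape (n_samples, 2)
>>> bool(np.allclose(scores, log_post[:, 1] - log_post[:, 0]))
True
```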
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L827)
Fit the model according to the given training data and parameters.
Changed in version 0.19: `store_covariances` has been moved to main constructor as `store_covariance`
Changed in version 0.19: `tol` has been moved to main constructor.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target values (integers).
Returns:
**self**object
Fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L944)
Perform classification on an array of test vectors X.
The predicted class C for each sample in X is returned.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Vector to be scored, where `n_samples` is the number of samples and `n_features` is the number of features.
Returns:
**C**ndarray of shape (n\_samples,)
Predicted class labels for each sample in X.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L984)
Return log of posterior probabilities of classification.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Array of samples/test vectors.
Returns:
**C**ndarray of shape (n\_samples, n\_classes)
Posterior log-probabilities of classification per class.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L964)
Return posterior probabilities of classification.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Array of samples/test vectors.
Returns:
**C**ndarray of shape (n\_samples, n\_classes)
Posterior probabilities of classification per class.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`
----------------------------------------------------------------------------
[Classifier comparison](../../auto_examples/classification/plot_classifier_comparison#sphx-glr-auto-examples-classification-plot-classifier-comparison-py)
[Linear and Quadratic Discriminant Analysis with covariance ellipsoid](../../auto_examples/classification/plot_lda_qda#sphx-glr-auto-examples-classification-plot-lda-qda-py)
scikit_learn sklearn.linear_model.OrthogonalMatchingPursuitCV sklearn.linear\_model.OrthogonalMatchingPursuitCV
=================================================
*class*sklearn.linear\_model.OrthogonalMatchingPursuitCV(*\**, *copy=True*, *fit\_intercept=True*, *normalize='deprecated'*, *max\_iter=None*, *cv=None*, *n\_jobs=None*, *verbose=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_omp.py#L867)
Cross-validated Orthogonal Matching Pursuit model (OMP).
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
Read more in the [User Guide](../linear_model#omp).
Parameters:
**copy**bool, default=True
Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**normalize**bool, default=True
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0. It will default to False in 1.2 and be removed in 1.4.
**max\_iter**int, default=None
Maximum number of iterations to perform, and therefore the maximum number of features to include. If None, defaults to 10% of `n_features`, but at least 5 if available.
**cv**int, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross-validation,
* integer, to specify the number of folds.
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, `KFold` is used.
Refer to the [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**n\_jobs**int, default=None
Number of CPUs to use during the cross validation. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**bool or int, default=False
Sets the verbosity amount.
Attributes:
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function.
**coef\_**ndarray of shape (n\_features,) or (n\_targets, n\_features)
Parameter vector (w in the problem formulation).
**n\_nonzero\_coefs\_**int
Estimated number of non-zero coefficients giving the best mean squared error over the cross-validation folds.
**n\_iter\_**int or array-like
Number of active features across every target for the model refit with the best hyperparameters obtained by cross-validating across all folds.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`orthogonal_mp`](sklearn.linear_model.orthogonal_mp#sklearn.linear_model.orthogonal_mp "sklearn.linear_model.orthogonal_mp")
Solves n\_targets Orthogonal Matching Pursuit problems.
[`orthogonal_mp_gram`](sklearn.linear_model.orthogonal_mp_gram#sklearn.linear_model.orthogonal_mp_gram "sklearn.linear_model.orthogonal_mp_gram")
Solves n\_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T \* X and the product X.T \* y.
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`Lars`](sklearn.linear_model.lars#sklearn.linear_model.Lars "sklearn.linear_model.Lars")
Least Angle Regression model a.k.a. LAR.
[`LassoLars`](sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars "sklearn.linear_model.LassoLars")
Lasso model fit with Least Angle Regression a.k.a. Lars.
[`OrthogonalMatchingPursuit`](sklearn.linear_model.orthogonalmatchingpursuit#sklearn.linear_model.OrthogonalMatchingPursuit "sklearn.linear_model.OrthogonalMatchingPursuit")
Orthogonal Matching Pursuit model (OMP).
[`LarsCV`](sklearn.linear_model.larscv#sklearn.linear_model.LarsCV "sklearn.linear_model.LarsCV")
Cross-validated Least Angle Regression model.
[`LassoLarsCV`](sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")
Cross-validated Lasso model fit with Least Angle Regression.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Generic sparse coding. Each column of the result is the solution to a Lasso problem.
#### Notes
In `fit`, once the optimal number of non-zero coefficients is found through cross-validation, the model is fit again using the entire training set.
#### Examples
```
>>> from sklearn.linear_model import OrthogonalMatchingPursuitCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=100, n_informative=10,
... noise=4, random_state=0)
>>> reg = OrthogonalMatchingPursuitCV(cv=5, normalize=False).fit(X, y)
>>> reg.score(X, y)
0.9991...
>>> reg.n_nonzero_coefs_
10
>>> reg.predict(X[:1,])
array([-78.3854...])
```
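As a hedged sketch of the refit behaviour described in the Notes, the fitted model can be reproduced by a plain `OrthogonalMatchingPursuit` refit on the full training set with the cross-validated sparsity level (assuming the same `fit_intercept` and `normalize` settings):
```
>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import (OrthogonalMatchingPursuit,
...                                   OrthogonalMatchingPursuitCV)
>>> X, y = make_regression(n_features=100, n_informative=10,
...                        noise=4, random_state=0)
>>> reg = OrthogonalMatchingPursuitCV(cv=5, normalize=False).fit(X, y)
>>> # refitting plain OMP with the cross-validated sparsity level on the
>>> # full training set reproduces the final model
>>> omp = OrthogonalMatchingPursuit(n_nonzero_coefs=reg.n_nonzero_coefs_,
...                                 normalize=False).fit(X, y)
>>> bool(np.allclose(omp.coef_, reg.coef_))
True
```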
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.OrthogonalMatchingPursuitCV.fit "sklearn.linear_model.OrthogonalMatchingPursuitCV.fit")(X, y) | Fit the model using X, y as training data. |
| [`get_params`](#sklearn.linear_model.OrthogonalMatchingPursuitCV.get_params "sklearn.linear_model.OrthogonalMatchingPursuitCV.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.OrthogonalMatchingPursuitCV.predict "sklearn.linear_model.OrthogonalMatchingPursuitCV.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.OrthogonalMatchingPursuitCV.score "sklearn.linear_model.OrthogonalMatchingPursuitCV.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.OrthogonalMatchingPursuitCV.set_params "sklearn.linear_model.OrthogonalMatchingPursuitCV.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_omp.py#L1008)
Fit the model using X, y as training data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,)
Target values. Will be cast to X’s dtype if necessary.
Returns:
**self**object
Returns an instance of self.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.OrthogonalMatchingPursuitCV`
-----------------------------------------------------------------
[Orthogonal Matching Pursuit](../../auto_examples/linear_model/plot_omp#sphx-glr-auto-examples-linear-model-plot-omp-py)
scikit_learn sklearn.utils.all_estimators sklearn.utils.all\_estimators
=============================
sklearn.utils.all\_estimators(*type\_filter=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/__init__.py#L1150)
Get a list of all estimators from sklearn.
This function crawls the module and gets all classes that inherit from BaseEstimator. Classes that are defined in test-modules are not included.
Parameters:
**type\_filter**{“classifier”, “regressor”, “cluster”, “transformer”} or list of such str, default=None
Which kind of estimators should be returned. If None, no filter is applied and all estimators are returned. Possible values are ‘classifier’, ‘regressor’, ‘cluster’ and ‘transformer’ to get estimators only of these specific types, or a list of these to get the estimators that fit at least one of the types.
Returns:
**estimators**list of tuples
List of (name, class), where `name` is the class name as string and `class` is the actual type of the class.
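A minimal usage sketch of `type_filter` (membership checks only, since the exact list of estimators depends on the scikit-learn version):
```
>>> from sklearn.utils import all_estimators
>>> classifiers = all_estimators(type_filter="classifier")
>>> names = [name for name, cls in classifiers]   # (name, class) tuples
>>> "RandomForestClassifier" in names
True
>>> any(name == "Ridge" for name in names)        # regressors are filtered out
False
```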
scikit_learn sklearn.datasets.fetch_species_distributions sklearn.datasets.fetch\_species\_distributions
==============================================
sklearn.datasets.fetch\_species\_distributions(*\**, *data\_home=None*, *download\_if\_missing=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_species_distributions.py#L140)
Loader for the species distribution dataset from Phillips et al. (2006).
Read more in the [User Guide](../../datasets#datasets).
Parameters:
**data\_home**str, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit\_learn\_data’ subfolders.
**download\_if\_missing**bool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
coveragesarray, shape = [14, 1592, 1212]
These represent the 14 features measured at each point of the map grid. The latitude/longitude values for the grid are discussed below. Missing data is represented by the value -9999.
trainrecord array, shape = (1624,)
The training points for the data. Each point has three fields:
* train[‘species’] is the species name
* train[‘dd long’] is the longitude, in degrees
* train[‘dd lat’] is the latitude, in degrees
testrecord array, shape = (620,)
The test points for the data. Same format as the training data.
Nx, Nyintegers
The number of longitudes (x) and latitudes (y) in the grid
x\_left\_lower\_corner, y\_left\_lower\_cornerfloats
The (x,y) position of the lower-left corner, in degrees
grid\_sizefloat
The spacing between points of the grid, in degrees
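A hedged sketch showing how the grid fields above can be combined to rebuild the longitude/latitude axes of the coverage maps (requires downloading the dataset on first use):
```
>>> import numpy as np
>>> from sklearn.datasets import fetch_species_distributions
>>> data = fetch_species_distributions()            # downloads on first call
>>> data.coverages.shape
(14, 1592, 1212)
>>> # rebuild the longitude (x) and latitude (y) axes of the coverage grid
>>> xgrid = data.x_left_lower_corner + np.arange(data.Nx) * data.grid_size
>>> ygrid = data.y_left_lower_corner + np.arange(data.Ny) * data.grid_size
>>> (xgrid.shape[0], ygrid.shape[0]) == (data.Nx, data.Ny)
True
```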
#### Notes
This dataset represents the geographic distribution of species. The dataset is provided by Phillips et al. (2006).
The two species are:
* [“Bradypus variegatus”](http://www.iucnredlist.org/details/3038/0), the Brown-throated Sloth.
* [“Microryzomys minutus”](http://www.iucnredlist.org/details/13408/0), also known as the Forest Small Rice Rat, a rodent that lives in Peru, Colombia, Ecuador, and Venezuela.
For an example of using this dataset with scikit-learn, see [examples/applications/plot\_species\_distribution\_modeling.py](../../auto_examples/applications/plot_species_distribution_modeling#sphx-glr-auto-examples-applications-plot-species-distribution-modeling-py).
#### References
* [“Maximum entropy modeling of species geographic distributions”](http://rob.schapire.net/papers/ecolmod.pdf) S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling, 190:231-259, 2006.
Examples using `sklearn.datasets.fetch_species_distributions`
-------------------------------------------------------------
[Species distribution modeling](../../auto_examples/applications/plot_species_distribution_modeling#sphx-glr-auto-examples-applications-plot-species-distribution-modeling-py)
[Kernel Density Estimate of Species Distributions](../../auto_examples/neighbors/plot_species_kde#sphx-glr-auto-examples-neighbors-plot-species-kde-py)
scikit_learn sklearn.isotonic.check_increasing sklearn.isotonic.check\_increasing
==================================
sklearn.isotonic.check\_increasing(*x*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/isotonic.py#L21)
Determine whether y is monotonically correlated with x.
y is found increasing or decreasing with respect to x based on a Spearman correlation test.
Parameters:
**x**array-like of shape (n\_samples,)
Training data.
**y**array-like of shape (n\_samples,)
Training target.
Returns:
**increasing\_bool**boolean
True if the relationship between x and y is increasing, False if it is decreasing.
#### Notes
The Spearman correlation coefficient is estimated from the data, and the sign of the resulting estimate is used as the result.
In the event that the 95% confidence interval based on Fisher transform spans zero, a warning is raised.
#### References
Fisher transformation. Wikipedia. <https://en.wikipedia.org/wiki/Fisher_transformation>
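A minimal illustration of the returned boolean (the sign of the Spearman estimate decides the result):
```
>>> from sklearn.isotonic import check_increasing
>>> x = [1, 2, 3, 4, 5]
>>> print(check_increasing(x, [1, 4, 9, 16, 25]))   # monotonically increasing
True
>>> print(check_increasing(x, [25, 16, 9, 4, 1]))   # monotonically decreasing
False
```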
scikit_learn sklearn.feature_selection.SelectFwe sklearn.feature\_selection.SelectFwe
====================================
*class*sklearn.feature\_selection.SelectFwe(*score\_func=<function f\_classif>*, *\**, *alpha=0.05*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L851)
Filter: Select the p-values corresponding to Family-wise error rate.
Read more in the [User Guide](../feature_selection#univariate-feature-selection).
Parameters:
**score\_func**callable, default=f\_classif
Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f\_classif (see the “See Also” section below). The default function only works with classification tasks.
**alpha**float, default=5e-2
The highest uncorrected p-value for features to keep.
Attributes:
**scores\_**array-like of shape (n\_features,)
Scores of features.
**pvalues\_**array-like of shape (n\_features,)
p-values of feature scores.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`f_classif`](sklearn.feature_selection.f_classif#sklearn.feature_selection.f_classif "sklearn.feature_selection.f_classif")
ANOVA F-value between label/feature for classification tasks.
[`chi2`](sklearn.feature_selection.chi2#sklearn.feature_selection.chi2 "sklearn.feature_selection.chi2")
Chi-squared stats of non-negative features for classification tasks.
[`f_regression`](sklearn.feature_selection.f_regression#sklearn.feature_selection.f_regression "sklearn.feature_selection.f_regression")
F-value between label/feature for regression tasks.
[`SelectPercentile`](sklearn.feature_selection.selectpercentile#sklearn.feature_selection.SelectPercentile "sklearn.feature_selection.SelectPercentile")
Select features based on percentile of the highest scores.
[`SelectKBest`](sklearn.feature_selection.selectkbest#sklearn.feature_selection.SelectKBest "sklearn.feature_selection.SelectKBest")
Select features based on the k highest scores.
[`SelectFpr`](sklearn.feature_selection.selectfpr#sklearn.feature_selection.SelectFpr "sklearn.feature_selection.SelectFpr")
Select features based on a false positive rate test.
[`SelectFdr`](sklearn.feature_selection.selectfdr#sklearn.feature_selection.SelectFdr "sklearn.feature_selection.SelectFdr")
Select features based on an estimated false discovery rate.
[`GenericUnivariateSelect`](sklearn.feature_selection.genericunivariateselect#sklearn.feature_selection.GenericUnivariateSelect "sklearn.feature_selection.GenericUnivariateSelect")
Univariate feature selector with configurable mode.
#### Examples
```
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFwe, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape
(569, 30)
>>> X_new = SelectFwe(chi2, alpha=0.01).fit_transform(X, y)
>>> X_new.shape
(569, 15)
```
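As a hedged sketch, the selection rule can be related to a Bonferroni-style threshold on the p-values: a feature is kept when its p-value is below `alpha / n_features`. This mirrors the current implementation and is shown here only as an illustration, not a documented guarantee.
```
>>> import numpy as np
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFwe, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> selector = SelectFwe(chi2, alpha=0.01).fit(X, y)
>>> # Bonferroni-style criterion: keep a feature when its p-value is below
>>> # alpha / n_features (assumed from the current implementation)
>>> mask = selector.pvalues_ < 0.01 / X.shape[1]
>>> bool(np.array_equal(selector.get_support(), mask))
True
```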
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.feature_selection.SelectFwe.fit "sklearn.feature_selection.SelectFwe.fit")(X, y) | Run score function on (X, y) and get the appropriate features. |
| [`fit_transform`](#sklearn.feature_selection.SelectFwe.fit_transform "sklearn.feature_selection.SelectFwe.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.feature_selection.SelectFwe.get_feature_names_out "sklearn.feature_selection.SelectFwe.get_feature_names_out")([input\_features]) | Mask feature names according to selected features. |
| [`get_params`](#sklearn.feature_selection.SelectFwe.get_params "sklearn.feature_selection.SelectFwe.get_params")([deep]) | Get parameters for this estimator. |
| [`get_support`](#sklearn.feature_selection.SelectFwe.get_support "sklearn.feature_selection.SelectFwe.get_support")([indices]) | Get a mask, or integer index, of the features selected. |
| [`inverse_transform`](#sklearn.feature_selection.SelectFwe.inverse_transform "sklearn.feature_selection.SelectFwe.inverse_transform")(X) | Reverse the transformation operation. |
| [`set_params`](#sklearn.feature_selection.SelectFwe.set_params "sklearn.feature_selection.SelectFwe.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.feature_selection.SelectFwe.transform "sklearn.feature_selection.SelectFwe.transform")(X) | Reduce X to the selected features. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_univariate_selection.py#L444)
Run score function on (X, y) and get the appropriate features.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The training input samples.
**y**array-like of shape (n\_samples,)
The target values (class labels in classification, real numbers in regression).
Returns:
**self**object
Returns the instance itself.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L146)
Mask feature names according to selected features.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_support(*indices=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L33)
Get a mask, or integer index, of the features selected.
Parameters:
**indices**bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
Returns:
**support**array
An index that selects the retained features from a feature vector. If `indices` is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If `indices` is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L106)
Reverse the transformation operation.
Parameters:
**X**array of shape [n\_samples, n\_selected\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_original\_features]
`X` with columns of zeros inserted where features would have been removed by [`transform`](#sklearn.feature_selection.SelectFwe.transform "sklearn.feature_selection.SelectFwe.transform").
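A short hedged sketch of the `transform` / `inverse_transform` round trip: the selected columns are kept, and `inverse_transform` re-inserts zero columns where features were removed.
```
>>> import numpy as np
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFwe, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> selector = SelectFwe(chi2, alpha=0.01).fit(X, y)
>>> X_sel = selector.transform(X)             # keep only the selected columns
>>> X_back = selector.inverse_transform(X_sel)
>>> X_back.shape == X.shape
True
>>> bool(np.all(X_back[:, ~selector.get_support()] == 0))
True
```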
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_selection/_base.py#L68)
Reduce X to the selected features.
Parameters:
**X**array of shape [n\_samples, n\_features]
The input samples.
Returns:
**X\_r**array of shape [n\_samples, n\_selected\_features]
The input samples with only the selected features.
scikit_learn sklearn.linear_model.RidgeCV sklearn.linear\_model.RidgeCV
=============================
*class*sklearn.linear\_model.RidgeCV(*alphas=(0.1, 1.0, 10.0)*, *\**, *fit\_intercept=True*, *normalize='deprecated'*, *scoring=None*, *cv=None*, *gcv\_mode=None*, *store\_cv\_values=False*, *alpha\_per\_target=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L2207)
Ridge regression with built-in cross-validation.
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
By default, it performs efficient Leave-One-Out Cross-Validation.
Read more in the [User Guide](../linear_model#ridge-regression).
Parameters:
**alphas**ndarray of shape (n\_alphas,), default=(0.1, 1.0, 10.0)
Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to `1 / (2C)` in other linear models such as [`LogisticRegression`](sklearn.linear_model.logisticregression#sklearn.linear_model.LogisticRegression "sklearn.linear_model.LogisticRegression") or [`LinearSVC`](sklearn.svm.linearsvc#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC"). If using Leave-One-Out cross-validation, alphas must be positive.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**scoring**str, callable, default=None
A string (see model evaluation documentation) or a scorer callable object / function with signature `scorer(estimator, X, y)`. If None, the negative mean squared error is used if cv is ‘auto’ or None (i.e. when using leave-one-out cross-validation), and the r2 score otherwise.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the efficient Leave-One-Out cross-validation
* integer, to specify the number of folds.
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if `y` is binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used, else, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used.
Refer [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
**gcv\_mode**{‘auto’, ‘svd’, ‘eigen’}, default=’auto’
Flag indicating which strategy to use when performing Leave-One-Out Cross-Validation. Options are:
```
'auto' : use 'svd' if n_samples > n_features, otherwise use 'eigen'
'svd' : force use of singular value decomposition of X when X is
dense, eigenvalue decomposition of X^T.X when X is sparse.
'eigen' : force computation via eigendecomposition of X.X^T
```
The ‘auto’ mode is the default and is intended to pick the cheaper option of the two depending on the shape of the training data.
**store\_cv\_values**bool, default=False
Flag indicating if the cross-validation values corresponding to each alpha should be stored in the `cv_values_` attribute (see below). This flag is only compatible with `cv=None` (i.e. using Leave-One-Out Cross-Validation).
**alpha\_per\_target**bool, default=False
Flag indicating whether to optimize the alpha value (picked from the `alphas` parameter list) for each target separately (for multi-output settings: multiple prediction targets). When set to `True`, after fitting, the `alpha_` attribute will contain a value for each target. When set to `False`, a single alpha is used for all targets.
New in version 0.24.
Attributes:
**cv\_values\_**ndarray of shape (n\_samples, n\_alphas) or shape (n\_samples, n\_targets, n\_alphas), optional
Cross-validation values for each alpha (only available if `store_cv_values=True` and `cv=None`). After `fit()` has been called, this attribute will contain the mean squared errors if `scoring is None` otherwise it will contain standardized per point prediction values.
**coef\_**ndarray of shape (n\_features) or (n\_targets, n\_features)
Weight vector(s).
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function. Set to 0.0 if `fit_intercept = False`.
**alpha\_**float or ndarray of shape (n\_targets,)
Estimated regularization parameter, or, if `alpha_per_target=True`, the estimated regularization parameter for each target.
**best\_score\_**float or ndarray of shape (n\_targets,)
Score of base estimator with best alpha, or, if `alpha_per_target=True`, a score for each target.
New in version 0.23.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`Ridge`](sklearn.linear_model.ridge#sklearn.linear_model.Ridge "sklearn.linear_model.Ridge")
Ridge regression.
[`RidgeClassifier`](sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier "sklearn.linear_model.RidgeClassifier")
Classifier based on ridge regression on {-1, 1} labels.
[`RidgeClassifierCV`](sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV "sklearn.linear_model.RidgeClassifierCV")
Ridge classifier with built-in cross validation.
#### Examples
```
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> X, y = load_diabetes(return_X_y=True)
>>> clf = RidgeCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y)
>>> clf.score(X, y)
0.5166...
```
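A hedged sketch showing the leave-one-out values stored with `store_cv_values=True` and how `alpha_` relates to them when the default loss (negative mean squared error) is used:
```
>>> import numpy as np
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> X, y = load_diabetes(return_X_y=True)
>>> alphas = np.array([1e-3, 1e-2, 1e-1, 1])
>>> reg = RidgeCV(alphas=alphas, store_cv_values=True).fit(X, y)
>>> reg.cv_values_.shape                # (n_samples, n_alphas), single target
(442, 4)
>>> # with the default loss, alpha_ minimises the mean leave-one-out error
>>> bool(reg.alpha_ == alphas[np.argmin(reg.cv_values_.mean(axis=0))])
True
```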
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.RidgeCV.fit "sklearn.linear_model.RidgeCV.fit")(X, y[, sample\_weight]) | Fit Ridge regression model with cv. |
| [`get_params`](#sklearn.linear_model.RidgeCV.get_params "sklearn.linear_model.RidgeCV.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.RidgeCV.predict "sklearn.linear_model.RidgeCV.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.RidgeCV.score "sklearn.linear_model.RidgeCV.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.RidgeCV.set_params "sklearn.linear_model.RidgeCV.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_ridge.py#L2107)
Fit Ridge regression model with cv.
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Training data. If using GCV, will be cast to float64 if necessary.
**y**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Target values. Will be cast to X’s dtype if necessary.
**sample\_weight**float or ndarray of shape (n\_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight.
Returns:
**self**object
Fitted estimator.
#### Notes
When sample\_weight is provided, the selected hyperparameter may depend on whether we use leave-one-out cross-validation (cv=None or cv=’auto’) or another form of cross-validation, because only leave-one-out cross-validation takes the sample weights into account when computing the validation score.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.RidgeCV`
---------------------------------------------
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Time-related feature engineering](../../auto_examples/applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py)
[Model-based and sequential feature selection](../../auto_examples/feature_selection/plot_select_from_model_diabetes#sphx-glr-auto-examples-feature-selection-plot-select-from-model-diabetes-py)
[Common pitfalls in the interpretation of coefficients of linear models](../../auto_examples/inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
[Face completion with a multi-output estimators](../../auto_examples/miscellaneous/plot_multioutput_face_completion#sphx-glr-auto-examples-miscellaneous-plot-multioutput-face-completion-py)
[Effect of transforming the targets in regression model](../../auto_examples/compose/plot_transformed_target#sphx-glr-auto-examples-compose-plot-transformed-target-py)
scikit_learn sklearn.covariance.oas sklearn.covariance.oas
======================
sklearn.covariance.oas(*X*, *\**, *assume\_centered=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_shrunk_covariance.py#L488)
Estimate covariance with the Oracle Approximating Shrinkage algorithm.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Data from which to compute the covariance estimate.
**assume\_centered**bool, default=False
If True, data will not be centered before computation. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, data will be centered before computation.
Returns:
**shrunk\_cov**array-like of shape (n\_features, n\_features)
Shrunk covariance.
**shrinkage**float
Coefficient in the convex combination used for the computation of the shrunk estimate.
#### Notes
The regularised (shrunk) covariance is:
(1 - shrinkage) \* cov + shrinkage \* mu \* np.identity(n\_features)
where mu = trace(cov) / n\_features
The formula we used to implement the OAS is slightly modified compared to the one given in the article. See [`OAS`](sklearn.covariance.oas#sklearn.covariance.OAS "sklearn.covariance.OAS") for more details.
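A hedged numerical check of the shrinkage formula above, computed from `empirical_covariance` (the default `assume_centered=False` is used in both calls):
```
>>> import numpy as np
>>> from sklearn.covariance import empirical_covariance, oas
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(100, 5)
>>> shrunk_cov, shrinkage = oas(X)
>>> emp_cov = empirical_covariance(X)
>>> mu = np.trace(emp_cov) / emp_cov.shape[0]
>>> target = (1 - shrinkage) * emp_cov + shrinkage * mu * np.identity(5)
>>> bool(np.allclose(shrunk_cov, target))
True
```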
scikit_learn sklearn.ensemble.HistGradientBoostingClassifier sklearn.ensemble.HistGradientBoostingClassifier
===============================================
*class*sklearn.ensemble.HistGradientBoostingClassifier(*loss='log\_loss'*, *\**, *learning\_rate=0.1*, *max\_iter=100*, *max\_leaf\_nodes=31*, *max\_depth=None*, *min\_samples\_leaf=20*, *l2\_regularization=0.0*, *max\_bins=255*, *categorical\_features=None*, *monotonic\_cst=None*, *warm\_start=False*, *early\_stopping='auto'*, *scoring='loss'*, *validation\_fraction=0.1*, *n\_iter\_no\_change=10*, *tol=1e-07*, *verbose=0*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1436)
Histogram-based Gradient Boosting Classification Tree.
This estimator is much faster than [`GradientBoostingClassifier`](sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier") for big datasets (n\_samples >= 10 000).
This estimator has native support for missing values (NaNs). During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child accordingly. If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples.
This implementation is inspired by [LightGBM](https://github.com/Microsoft/LightGBM).
Read more in the [User Guide](../ensemble#histogram-based-gradient-boosting).
New in version 0.21.
Parameters:
**loss**{‘log\_loss’, ‘auto’, ‘binary\_crossentropy’, ‘categorical\_crossentropy’}, default=’log\_loss’
The loss function to use in the boosting process.
For binary classification problems, ‘log\_loss’ is also known as logistic loss, binomial deviance or binary crossentropy. Internally, the model fits one tree per boosting iteration and uses the logistic sigmoid function (expit) as inverse link function to compute the predicted positive class probability.
For multiclass classification problems, ‘log\_loss’ is also known as multinomial deviance or categorical crossentropy. Internally, the model fits one tree per boosting iteration and per class and uses the softmax function as inverse link function to compute the predicted probabilities of the classes.
Deprecated since version 1.1: The loss arguments ‘auto’, ‘binary\_crossentropy’ and ‘categorical\_crossentropy’ were deprecated in v1.1 and will be removed in version 1.3. Use `loss='log_loss'` which is equivalent.
**learning\_rate**float, default=0.1
The learning rate, also known as *shrinkage*. This is used as a multiplicative factor for the leaves values. Use `1` for no shrinkage.
**max\_iter**int, default=100
The maximum number of iterations of the boosting process, i.e. the maximum number of trees for binary classification. For multiclass classification, `n_classes` trees per iteration are built.
**max\_leaf\_nodes**int or None, default=31
The maximum number of leaves for each tree. Must be strictly greater than 1. If None, there is no maximum limit.
**max\_depth**int or None, default=None
The maximum depth of each tree. The depth of a tree is the number of edges to go from the root to the deepest leaf. Depth isn’t constrained by default.
**min\_samples\_leaf**int, default=20
The minimum number of samples per leaf. For small datasets with less than a few hundred samples, it is recommended to lower this value since only very shallow trees would be built.
**l2\_regularization**float, default=0
The L2 regularization parameter. Use 0 for no regularization.
**max\_bins**int, default=255
The maximum number of bins to use for non-missing values. Before training, each feature of the input array `X` is binned into integer-valued bins, which allows for a much faster training stage. Features with a small number of unique values may use less than `max_bins` bins. In addition to the `max_bins` bins, one more bin is always reserved for missing values. Must be no larger than 255.
**categorical\_features**array-like of {bool, int} of shape (n\_features) or shape (n\_categorical\_features,), default=None
Indicates the categorical features.
* None : no feature will be considered categorical.
* boolean array-like : boolean mask indicating categorical features.
* integer array-like : integer indices indicating categorical features.
For each categorical feature, there must be at most `max_bins` unique categories, and each categorical value must be in [0, max\_bins - 1].
Read more in the [User Guide](../ensemble#categorical-support-gbdt).
New in version 0.24.
**monotonic\_cst**array-like of int of shape (n\_features), default=None
Indicates the monotonic constraint to enforce on each feature. -1, 1 and 0 respectively correspond to a negative constraint, positive constraint and no constraint. Read more in the [User Guide](../ensemble#monotonic-cst-gbdt).
New in version 0.23.
**warm\_start**bool, default=False
When set to `True`, reuse the solution of the previous call to fit and add more estimators to the ensemble. For results to be valid, the estimator should be re-trained on the same data only. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**early\_stopping**‘auto’ or bool, default=’auto’
If ‘auto’, early stopping is enabled if the sample size is larger than 10000. If True, early stopping is enabled, otherwise early stopping is disabled.
New in version 0.23.
**scoring**str or callable or None, default=’loss’
Scoring parameter to use for early stopping. It can be a single string (see [The scoring parameter: defining model evaluation rules](../model_evaluation#scoring-parameter)) or a callable (see [Defining your scoring strategy from metric functions](../model_evaluation#scoring)). If None, the estimator’s default scorer is used. If `scoring='loss'`, early stopping is checked w.r.t the loss value. Only used if early stopping is performed.
**validation\_fraction**int or float or None, default=0.1
Proportion (or absolute size) of training data to set aside as validation data for early stopping. If None, early stopping is done on the training data. Only used if early stopping is performed.
**n\_iter\_no\_change**int, default=10
Used to determine when to “early stop”. The fitting process is stopped when none of the last `n_iter_no_change` scores are better than the `n_iter_no_change - 1`-th-to-last one, up to some tolerance. Only used if early stopping is performed.
**tol**float, default=1e-7
The absolute tolerance to use when comparing scores. The higher the tolerance, the more likely we are to early stop: higher tolerance means that it will be harder for subsequent iterations to be considered an improvement upon the reference score.
**verbose**int, default=0
The verbosity level. If not zero, print some information about the fitting process.
**random\_state**int, RandomState instance or None, default=None
Pseudo-random number generator to control the subsampling in the binning process, and the train/validation data split if early stopping is enabled. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**classes\_**array, shape = (n\_classes,)
Class labels.
**do\_early\_stopping\_**bool
Indicates whether early stopping is used during training.
[`n_iter_`](#sklearn.ensemble.HistGradientBoostingClassifier.n_iter_ "sklearn.ensemble.HistGradientBoostingClassifier.n_iter_")int
Number of iterations of the boosting process.
**n\_trees\_per\_iteration\_**int
The number of tree that are built at each iteration. This is equal to 1 for binary classification, and to `n_classes` for multiclass classification.
**train\_score\_**ndarray, shape (n\_iter\_+1,)
The scores at each iteration on the training data. The first entry is the score of the ensemble before the first iteration. Scores are computed according to the `scoring` parameter. If `scoring` is not ‘loss’, scores are computed on a subset of at most 10 000 samples. Empty if no early stopping.
**validation\_score\_**ndarray, shape (n\_iter\_+1,)
The scores at each iteration on the held-out validation data. The first entry is the score of the ensemble before the first iteration. Scores are computed according to the `scoring` parameter. Empty if no early stopping or if `validation_fraction` is None.
**is\_categorical\_**ndarray, shape (n\_features, ) or None
Boolean mask for the categorical features. `None` if there are no categorical features.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`GradientBoostingClassifier`](sklearn.ensemble.gradientboostingclassifier#sklearn.ensemble.GradientBoostingClassifier "sklearn.ensemble.GradientBoostingClassifier")
Exact gradient boosting method that does not scale as well on datasets with a large number of samples.
[`sklearn.tree.DecisionTreeClassifier`](sklearn.tree.decisiontreeclassifier#sklearn.tree.DecisionTreeClassifier "sklearn.tree.DecisionTreeClassifier")
A decision tree classifier.
[`RandomForestClassifier`](sklearn.ensemble.randomforestclassifier#sklearn.ensemble.RandomForestClassifier "sklearn.ensemble.RandomForestClassifier")
A meta-estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
[`AdaBoostClassifier`](sklearn.ensemble.adaboostclassifier#sklearn.ensemble.AdaBoostClassifier "sklearn.ensemble.AdaBoostClassifier")
A meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases.
#### Examples
```
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> clf = HistGradientBoostingClassifier().fit(X, y)
>>> clf.score(X, y)
1.0
```
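A small hedged sketch of the native missing-value support described above, on tiny toy data (parameters such as `min_samples_leaf=1` are chosen only to make the toy example work; exact outputs are illustrative):
```
>>> import numpy as np
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> X = np.array([0, 1, 2, np.nan, 3, 4, 5, 6]).reshape(-1, 1)
>>> y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
>>> # tiny toy data, so allow very small leaves
>>> clf = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
>>> clf.score(X, y)
1.0
>>> clf.predict([[np.nan]])   # NaNs were only seen with the first class here
array([0])
```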
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.ensemble.HistGradientBoostingClassifier.decision_function "sklearn.ensemble.HistGradientBoostingClassifier.decision_function")(X) | Compute the decision function of `X`. |
| [`fit`](#sklearn.ensemble.HistGradientBoostingClassifier.fit "sklearn.ensemble.HistGradientBoostingClassifier.fit")(X, y[, sample\_weight]) | Fit the gradient boosting model. |
| [`get_params`](#sklearn.ensemble.HistGradientBoostingClassifier.get_params "sklearn.ensemble.HistGradientBoostingClassifier.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.ensemble.HistGradientBoostingClassifier.predict "sklearn.ensemble.HistGradientBoostingClassifier.predict")(X) | Predict classes for X. |
| [`predict_proba`](#sklearn.ensemble.HistGradientBoostingClassifier.predict_proba "sklearn.ensemble.HistGradientBoostingClassifier.predict_proba")(X) | Predict class probabilities for X. |
| [`score`](#sklearn.ensemble.HistGradientBoostingClassifier.score "sklearn.ensemble.HistGradientBoostingClassifier.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.ensemble.HistGradientBoostingClassifier.set_params "sklearn.ensemble.HistGradientBoostingClassifier.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`staged_decision_function`](#sklearn.ensemble.HistGradientBoostingClassifier.staged_decision_function "sklearn.ensemble.HistGradientBoostingClassifier.staged_decision_function")(X) | Compute decision function of `X` for each iteration. |
| [`staged_predict`](#sklearn.ensemble.HistGradientBoostingClassifier.staged_predict "sklearn.ensemble.HistGradientBoostingClassifier.staged_predict")(X) | Predict classes at each iteration. |
| [`staged_predict_proba`](#sklearn.ensemble.HistGradientBoostingClassifier.staged_predict_proba "sklearn.ensemble.HistGradientBoostingClassifier.staged_predict_proba")(X) | Predict class probabilities at each iteration. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1760)
Compute the decision function of `X`.
Parameters:
**X**array-like, shape (n\_samples, n\_features)
The input samples.
Returns:
**decision**ndarray, shape (n\_samples,) or (n\_samples, n\_trees\_per\_iteration)
The raw predicted values (i.e. the sum of the leaves of the trees) for each sample. n\_trees\_per\_iteration is equal to the number of classes in multiclass classification.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L260)
Fit the gradient boosting model.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
**y**array-like of shape (n\_samples,)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weights of training data.
New in version 0.23.
Returns:
**self**object
Fitted estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_iter\_
Number of iterations of the boosting process.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1685)
Predict classes for X.
Parameters:
**X**array-like, shape (n\_samples, n\_features)
The input samples.
Returns:
**y**ndarray, shape (n\_samples,)
The predicted classes.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1724)
Predict class probabilities for X.
Parameters:
**X**array-like, shape (n\_samples, n\_features)
The input samples.
Returns:
**p**ndarray, shape (n\_samples, n\_classes)
The class probabilities of the input samples.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
staged\_decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1781)
Compute decision function of `X` for each iteration.
This method allows monitoring (i.e. determining the error on a test set) after each stage.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Yields:
**decision**generator of ndarray of shape (n\_samples,) or (n\_samples, n\_trees\_per\_iteration)
The decision function of the input samples, which corresponds to the raw values predicted from the trees of the ensemble. The classes correspond to those in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
staged\_predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1702)
Predict classes at each iteration.
This method allows monitoring (i.e. determining the error on a test set) after each stage.
New in version 0.24.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Yields:
**y**generator of ndarray of shape (n\_samples,)
The predicted classes of the input samples, for each iteration.
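As a brief, hypothetical sketch of the monitoring use case mentioned above (the train/test split and `max_iter=5` are arbitrary choices, not part of this reference):
```
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> clf = HistGradientBoostingClassifier(max_iter=5).fit(X_train, y_train)
>>> for i, y_pred in enumerate(clf.staged_predict(X_test)):
...     acc = (y_pred == y_test).mean()  # test accuracy after i + 1 iterations
```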
staged\_predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L1740)
Predict class probabilities at each iteration.
This method allows monitoring (i.e. determining the error on a test set) after each stage.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Yields:
**y**generator of ndarray of shape (n\_samples,)
The predicted class probabilities of the input samples, for each iteration.
Examples using `sklearn.ensemble.HistGradientBoostingClassifier`
----------------------------------------------------------------
[Release Highlights for scikit-learn 0.24](../../auto_examples/release_highlights/plot_release_highlights_0_24_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-24-0-py)
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Release Highlights for scikit-learn 0.22](../../auto_examples/release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
[Feature transformations with ensembles of trees](../../auto_examples/ensemble/plot_feature_transformation#sphx-glr-auto-examples-ensemble-plot-feature-transformation-py)
scikit_learn sklearn.linear_model.LassoCV sklearn.linear\_model.LassoCV
=============================
*class*sklearn.linear\_model.LassoCV(*\**, *eps=0.001*, *n\_alphas=100*, *alphas=None*, *fit\_intercept=True*, *normalize='deprecated'*, *precompute='auto'*, *max\_iter=1000*, *tol=0.0001*, *copy\_X=True*, *cv=None*, *verbose=False*, *n\_jobs=None*, *positive=False*, *random\_state=None*, *selection='cyclic'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L1794)
Lasso linear model with iterative fitting along a regularization path.
See glossary entry for [cross-validation estimator](https://scikit-learn.org/1.1/glossary.html#term-cross-validation-estimator).
The best model is selected by cross-validation.
The optimization objective for Lasso is:
```
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
```
Read more in the [User Guide](../linear_model#lasso).
Parameters:
**eps**float, default=1e-3
Length of the path. `eps=1e-3` means that `alpha_min / alpha_max = 1e-3`.
**n\_alphas**int, default=100
Number of alphas along the regularization path.
**alphas**ndarray, default=None
List of alphas at which to compute the models. If `None`, alphas are set automatically.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**normalize**bool, default=False
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0 and will be removed in 1.2.
**precompute**‘auto’, bool or array-like of shape (n\_features, n\_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**max\_iter**int, default=1000
The maximum number of iterations.
**tol**float, default=1e-4
The tolerance for the optimization: if the updates are smaller than `tol`, the optimization code checks the dual gap for optimality and continues until it is smaller than `tol`.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**cv**int, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross-validation,
* int, to specify the number of folds.
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, `KFold` is used.
Refer to the [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if None changed from 3-fold to 5-fold.
**verbose**bool or int, default=False
Amount of verbosity.
**n\_jobs**int, default=None
Number of CPUs to use during the cross validation. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**positive**bool, default=False
If positive, restrict regression coefficients to be positive.
**random\_state**int, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when `selection` == ‘random’. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**selection**{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4.
Attributes:
**alpha\_**float
The amount of penalization chosen by cross validation.
**coef\_**ndarray of shape (n\_features,) or (n\_targets, n\_features)
Parameter vector (w in the cost function formula).
**intercept\_**float or ndarray of shape (n\_targets,)
Independent term in decision function.
**mse\_path\_**ndarray of shape (n\_alphas, n\_folds)
Mean square error for the test set on each fold, varying alpha.
**alphas\_**ndarray of shape (n\_alphas,)
The grid of alphas used for fitting.
**dual\_gap\_**float or ndarray of shape (n\_targets,)
The dual gap at the end of the optimization for the optimal alpha (`alpha_`).
**n\_iter\_**int
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`lasso_path`](sklearn.linear_model.lasso_path#sklearn.linear_model.lasso_path "sklearn.linear_model.lasso_path")
Compute Lasso path with coordinate descent.
[`Lasso`](sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso")
The Lasso is a linear model that estimates sparse coefficients.
[`LassoLars`](sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars "sklearn.linear_model.LassoLars")
Lasso model fit with Least Angle Regression a.k.a. Lars.
[`LassoCV`](#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")
Lasso linear model with iterative fitting along a regularization path.
[`LassoLarsCV`](sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")
Cross-validated Lasso using the LARS algorithm.
#### Notes
In `fit`, once the best parameter `alpha` is found through cross-validation, the model is fit again using the entire training set.
To avoid unnecessary memory duplication the `X` argument of the `fit` method should be directly passed as a Fortran-contiguous numpy array.
For an example, see [examples/linear\_model/plot\_lasso\_model\_selection.py](../../auto_examples/linear_model/plot_lasso_model_selection#sphx-glr-auto-examples-linear-model-plot-lasso-model-selection-py).
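A minimal sketch of the memory note above, assuming a synthetic dataset from `make_regression`; passing `X` in Fortran order avoids an internal copy:
```
>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoCV
>>> X, y = make_regression(n_samples=200, n_features=10, random_state=0)
>>> X = np.asfortranarray(X)  # Fortran-contiguous input avoids duplication in fit
>>> reg = LassoCV(cv=5, random_state=0).fit(X, y)
```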
#### Examples
```
>>> from sklearn.linear_model import LassoCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4, random_state=0)
>>> reg = LassoCV(cv=5, random_state=0).fit(X, y)
>>> reg.score(X, y)
0.9993...
>>> reg.predict(X[:1,])
array([-78.4951...])
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.LassoCV.fit "sklearn.linear_model.LassoCV.fit")(X, y[, sample\_weight]) | Fit linear model with coordinate descent. |
| [`get_params`](#sklearn.linear_model.LassoCV.get_params "sklearn.linear_model.LassoCV.get_params")([deep]) | Get parameters for this estimator. |
| [`path`](#sklearn.linear_model.LassoCV.path "sklearn.linear_model.LassoCV.path")(X, y, \*[, eps, n\_alphas, alphas, ...]) | Compute Lasso path with coordinate descent. |
| [`predict`](#sklearn.linear_model.LassoCV.predict "sklearn.linear_model.LassoCV.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.LassoCV.score "sklearn.linear_model.LassoCV.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.LassoCV.set_params "sklearn.linear_model.LassoCV.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L1521)
Fit linear model with coordinate descent.
Fit is on grid of alphas and best alpha estimated by cross-validation.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**sample\_weight**float or array-like of shape (n\_samples,), default=None
Sample weights used for fitting and evaluation of the weighted mean squared error of each cv-fold. Note that the cross validated MSE that is finally used to find the best model is the unweighted mean over the (weighted) MSEs of each test fold.
Returns:
**self**object
Returns an instance of fitted model.
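After fitting, the selected penalty and the cross-validation error path can be inspected; a sketch with synthetic data (the default `n_alphas=100` with `cv=5` gives `mse_path_` of shape `(100, 5)`):
```
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import LassoCV
>>> X, y = make_regression(noise=4, random_state=0)
>>> reg = LassoCV(cv=5, random_state=0).fit(X, y)
>>> alpha = reg.alpha_   # penalty chosen by cross-validation
>>> reg.mse_path_.shape  # one MSE per (alpha, fold) pair
(100, 5)
```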
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*static*path(*X*, *y*, *\**, *eps=0.001*, *n\_alphas=100*, *alphas=None*, *precompute='auto'*, *Xy=None*, *copy\_X=True*, *coef\_init=None*, *verbose=False*, *return\_n\_iter=False*, *positive=False*, *\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L192)
Compute Lasso path with coordinate descent.
The Lasso optimization function varies for mono and multi-outputs.
For mono-output tasks it is:
```
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
```
For multi-output tasks it is:
```
(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
```
Where:
```
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
```
i.e. the sum of the norms of each row.
Read more in the [User Guide](../linear_model#lasso).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If `y` is mono-output then `X` can be sparse.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**eps**float, default=1e-3
Length of the path. `eps=1e-3` means that `alpha_min / alpha_max = 1e-3`.
**n\_alphas**int, default=100
Number of alphas along the regularization path.
**alphas**ndarray, default=None
List of alphas at which to compute the models. If `None`, alphas are set automatically.
**precompute**‘auto’, bool or array-like of shape (n\_features, n\_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**Xy**array-like of shape (n\_features,) or (n\_features, n\_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**coef\_init**ndarray of shape (n\_features, ), default=None
The initial values of the coefficients.
**verbose**bool or int, default=False
Amount of verbosity.
**return\_n\_iter**bool, default=False
Whether to return the number of iterations or not.
**positive**bool, default=False
If set to True, forces coefficients to be positive. (Only allowed when `y.ndim == 1`).
**\*\*params**kwargs
Keyword arguments passed to the coordinate descent solver.
Returns:
**alphas**ndarray of shape (n\_alphas,)
The alphas along the path where models are computed.
**coefs**ndarray of shape (n\_features, n\_alphas) or (n\_targets, n\_features, n\_alphas)
Coefficients along the path.
**dual\_gaps**ndarray of shape (n\_alphas,)
The dual gaps at the end of the optimization for each alpha.
**n\_iters**list of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`Lasso`](sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso")
The Lasso is a linear model that estimates sparse coefficients.
[`LassoLars`](sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars "sklearn.linear_model.LassoLars")
Lasso model fit with Least Angle Regression a.k.a. Lars.
[`LassoCV`](#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")
Lasso linear model with iterative fitting along a regularization path.
[`LassoLarsCV`](sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")
Cross-validated Lasso using the LARS algorithm.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Estimator that can be used to transform signals into a sparse linear combination of atoms from a fixed dictionary.
#### Notes
For an example, see [examples/linear\_model/plot\_lasso\_coordinate\_descent\_path.py](../../auto_examples/linear_model/plot_lasso_coordinate_descent_path#sphx-glr-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py).
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
Note that in certain cases, the Lars solver may be significantly faster at computing this path. In particular, linear interpolation can be used to retrieve model coefficients between the values output by `lars_path`.
#### Examples
Comparing lasso\_path and lars\_path with interpolation:
```
>>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
```
```
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]]
```
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.LassoCV`
---------------------------------------------
[Combine predictors using stacking](../../auto_examples/ensemble/plot_stack_predictors#sphx-glr-auto-examples-ensemble-plot-stack-predictors-py)
[Lasso model selection: AIC-BIC / cross-validation](../../auto_examples/linear_model/plot_lasso_model_selection#sphx-glr-auto-examples-linear-model-plot-lasso-model-selection-py)
[Common pitfalls in the interpretation of coefficients of linear models](../../auto_examples/inspection/plot_linear_model_coefficient_interpretation#sphx-glr-auto-examples-inspection-plot-linear-model-coefficient-interpretation-py)
[Cross-validation on diabetes Dataset Exercise](../../auto_examples/exercises/plot_cv_diabetes#sphx-glr-auto-examples-exercises-plot-cv-diabetes-py)
scikit_learn sklearn.metrics.mutual_info_score sklearn.metrics.mutual\_info\_score
===================================
sklearn.metrics.mutual\_info\_score(*labels\_true*, *labels\_pred*, *\**, *contingency=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/cluster/_supervised.py#L723)
Mutual Information between two clusterings.
The Mutual Information is a measure of the similarity between two labels of the same data. Where \(|U\_i|\) is the number of samples in cluster \(U\_i\) and \(|V\_j|\) is the number of samples in cluster \(V\_j\), the Mutual Information between clusterings \(U\) and \(V\) is given as:
\[MI(U,V)=\sum\_{i=1}^{|U|} \sum\_{j=1}^{|V|} \frac{|U\_i\cap V\_j|}{N} \log\frac{N|U\_i \cap V\_j|}{|U\_i||V\_j|}\] This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way.
This metric is furthermore symmetric: switching \(U\) (i.e. `labels_true`) with \(V\) (i.e. `labels_pred`) will return the same score value. This can be useful to measure the agreement of two independent label assignment strategies on the same dataset when the real ground truth is not known.
Read more in the [User Guide](../clustering#mutual-info-score).
Parameters:
**labels\_true**int array, shape = [n\_samples]
A clustering of the data into disjoint subsets, called \(U\) in the above formula.
**labels\_pred**int array-like of shape (n\_samples,)
A clustering of the data into disjoint subsets, called \(V\) in the above formula.
**contingency**{ndarray, sparse matrix} of shape (n\_classes\_true, n\_classes\_pred), default=None
A contingency matrix given by the `contingency_matrix` function. If value is `None`, it will be computed, otherwise the given value is used, with `labels_true` and `labels_pred` ignored.
Returns:
**mi**float
Mutual information, a non-negative value, measured in nats using the natural logarithm.
See also
[`adjusted_mutual_info_score`](sklearn.metrics.adjusted_mutual_info_score#sklearn.metrics.adjusted_mutual_info_score "sklearn.metrics.adjusted_mutual_info_score")
Adjusted against chance Mutual Information.
[`normalized_mutual_info_score`](sklearn.metrics.normalized_mutual_info_score#sklearn.metrics.normalized_mutual_info_score "sklearn.metrics.normalized_mutual_info_score")
Normalized Mutual Information.
#### Notes
The logarithm used is the natural logarithm (base-e).
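A minimal usage sketch (the labels are arbitrary; the score equals ln 2 because the two clusterings match up to a permutation of label values):
```
>>> from sklearn.metrics import mutual_info_score
>>> labels_true = [0, 0, 1, 1]
>>> labels_pred = [1, 1, 0, 0]
>>> mutual_info_score(labels_true, labels_pred)
0.6931...
```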
Examples using `sklearn.metrics.mutual_info_score`
--------------------------------------------------
[Adjustment for chance in clustering performance evaluation](../../auto_examples/cluster/plot_adjusted_for_chance_measures#sphx-glr-auto-examples-cluster-plot-adjusted-for-chance-measures-py)
scikit_learn sklearn.utils.check_consistent_length sklearn.utils.check\_consistent\_length
=======================================
sklearn.utils.check\_consistent\_length(*\*arrays*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/validation.py#L373)
Check that all arrays have consistent first dimensions.
Checks whether all objects in arrays have the same shape or length.
Parameters:
**\*arrays**list or tuple of input objects.
Objects that will be checked for consistent length.
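#### Examples
A short sketch of its behaviour: the call returns silently when the inputs are consistent and raises a `ValueError` otherwise (the exact message may differ between versions):
```
>>> from sklearn.utils import check_consistent_length
>>> check_consistent_length([1, 2, 3], [4, 5, 6])  # same length: no error
>>> check_consistent_length([1, 2, 3], [4, 5])
Traceback (most recent call last):
    ...
ValueError: Found input variables with inconsistent numbers of samples: [3, 2]
```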
scikit_learn sklearn.svm.LinearSVC sklearn.svm.LinearSVC
=====================
*class*sklearn.svm.LinearSVC(*penalty='l2'*, *loss='squared\_hinge'*, *\**, *dual=True*, *tol=0.0001*, *C=1.0*, *multi\_class='ovr'*, *fit\_intercept=True*, *intercept\_scaling=1*, *class\_weight=None*, *verbose=0*, *random\_state=None*, *max\_iter=1000*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L11)
Linear Support Vector Classification.
Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.
This class supports both dense and sparse input and the multiclass support is handled according to a one-vs-the-rest scheme.
Read more in the [User Guide](../svm#svm-classification).
Parameters:
**penalty**{‘l1’, ‘l2’}, default=’l2’
Specifies the norm used in the penalization. The ‘l2’ penalty is the standard used in SVC. The ‘l1’ leads to `coef_` vectors that are sparse.
**loss**{‘hinge’, ‘squared\_hinge’}, default=’squared\_hinge’
Specifies the loss function. ‘hinge’ is the standard SVM loss (used e.g. by the SVC class) while ‘squared\_hinge’ is the square of the hinge loss. The combination of `penalty='l1'` and `loss='hinge'` is not supported.
**dual**bool, default=True
Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n\_samples > n\_features.
**tol**float, default=1e-4
Tolerance for stopping criteria.
**C**float, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
**multi\_class**{‘ovr’, ‘crammer\_singer’}, default=’ovr’
Determines the multi-class strategy if `y` contains more than two classes. `"ovr"` trains n\_classes one-vs-rest classifiers, while `"crammer_singer"` optimizes a joint objective over all classes. While `crammer_singer` is interesting from a theoretical perspective as it is consistent, it is seldom used in practice as it rarely leads to better accuracy and is more expensive to compute. If `"crammer_singer"` is chosen, the options loss, penalty and dual will be ignored.
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).
**intercept\_scaling**float, default=1
When self.fit\_intercept is True, the instance vector x becomes `[x, self.intercept_scaling]`, i.e. a “synthetic” feature with constant value equal to intercept\_scaling is appended to the instance vector. The intercept becomes intercept\_scaling \* synthetic feature weight. Note: the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept\_scaling has to be increased.
**class\_weight**dict or ‘balanced’, default=None
Set the parameter C of class i to `class_weight[i]*C` for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`.
**verbose**int, default=0
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.
**random\_state**int, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data for the dual coordinate descent (if `dual=True`). When `dual=False` the underlying implementation of [`LinearSVC`](#sklearn.svm.LinearSVC "sklearn.svm.LinearSVC") is not random and `random_state` has no effect on the results. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**max\_iter**int, default=1000
The maximum number of iterations to be run.
Attributes:
**coef\_**ndarray of shape (1, n\_features) if n\_classes == 2 else (n\_classes, n\_features)
Weights assigned to the features (coefficients in the primal problem).
`coef_` is a readonly property derived from `raw_coef_` that follows the internal memory layout of liblinear.
**intercept\_**ndarray of shape (1,) if n\_classes == 2 else (n\_classes,)
Constants in decision function.
**classes\_**ndarray of shape (n\_classes,)
The unique class labels.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_iter\_**int
Maximum number of iterations run across all classes.
See also
[`SVC`](sklearn.svm.svc#sklearn.svm.SVC "sklearn.svm.SVC")
Implementation of Support Vector Machine classifier using libsvm: the kernel can be non-linear but its SMO algorithm does not scale to a large number of samples as LinearSVC does. Furthermore, SVC multi-class mode is implemented using a one-vs-one scheme, while LinearSVC uses one-vs-the-rest. It is possible to implement one-vs-the-rest with SVC by using the [`OneVsRestClassifier`](sklearn.multiclass.onevsrestclassifier#sklearn.multiclass.OneVsRestClassifier "sklearn.multiclass.OneVsRestClassifier") wrapper. Finally, SVC can fit dense data without a memory copy if the input is C-contiguous. Sparse data will still incur a memory copy, though.
[`sklearn.linear_model.SGDClassifier`](sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier "sklearn.linear_model.SGDClassifier")
SGDClassifier can optimize the same cost function as LinearSVC by adjusting the penalty and loss parameters. In addition it requires less memory, allows incremental (online) learning, and implements various loss functions and regularization regimes.
#### Notes
The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller `tol` parameter.
The underlying implementation, liblinear, uses a sparse internal representation for the data that will incur a memory copy.
Predict output may not match that of standalone liblinear in certain cases. See [differences from liblinear](../linear_model#liblinear-differences) in the narrative documentation.
#### References
[LIBLINEAR: A Library for Large Linear Classification](https://www.csie.ntu.edu.tw/~cjlin/liblinear/)
#### Examples
```
>>> from sklearn.svm import LinearSVC
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_features=4, random_state=0)
>>> clf = make_pipeline(StandardScaler(),
... LinearSVC(random_state=0, tol=1e-5))
>>> clf.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('linearsvc', LinearSVC(random_state=0, tol=1e-05))])
```
```
>>> print(clf.named_steps['linearsvc'].coef_)
[[0.141... 0.526... 0.679... 0.493...]]
```
```
>>> print(clf.named_steps['linearsvc'].intercept_)
[0.1693...]
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
```
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.svm.LinearSVC.decision_function "sklearn.svm.LinearSVC.decision_function")(X) | Predict confidence scores for samples. |
| [`densify`](#sklearn.svm.LinearSVC.densify "sklearn.svm.LinearSVC.densify")() | Convert coefficient matrix to dense array format. |
| [`fit`](#sklearn.svm.LinearSVC.fit "sklearn.svm.LinearSVC.fit")(X, y[, sample\_weight]) | Fit the model according to the given training data. |
| [`get_params`](#sklearn.svm.LinearSVC.get_params "sklearn.svm.LinearSVC.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.svm.LinearSVC.predict "sklearn.svm.LinearSVC.predict")(X) | Predict class labels for samples in X. |
| [`score`](#sklearn.svm.LinearSVC.score "sklearn.svm.LinearSVC.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.svm.LinearSVC.set_params "sklearn.svm.LinearSVC.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`sparsify`](#sklearn.svm.LinearSVC.sparsify "sklearn.svm.LinearSVC.sparsify")() | Convert coefficient matrix to sparse format. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L408)
Predict confidence scores for samples.
The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the confidence scores.
Returns:
**scores**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Confidence scores per `(n_samples, n_classes)` combination. In the binary case, confidence score for `self.classes_[1]` where >0 means this class would be predicted.
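A sketch of the binary case, reusing the pipeline from the class-level example (synthetic data; one signed score per sample):
```
>>> from sklearn.svm import LinearSVC
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_features=4, random_state=0)
>>> clf = make_pipeline(StandardScaler(), LinearSVC(random_state=0, tol=1e-5)).fit(X, y)
>>> clf.decision_function(X[:3]).shape  # binary task: shape (n_samples,)
(3,)
```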
densify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L477)
Convert coefficient matrix to dense array format.
Converts the `coef_` member (back) to a numpy.ndarray. This is the default format of `coef_` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns:
self
Fitted estimator.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/svm/_classes.py#L219)
Fit the model according to the given training data.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vector, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target vector relative to X.
**sample\_weight**array-like of shape (n\_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight.
New in version 0.18.
Returns:
**self**object
An instance of the estimator.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L433)
Predict class labels for samples in X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the predictions.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
Vector containing the class labels for each sample.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
sparsify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L497)
Convert coefficient matrix to sparse format.
Converts the `coef_` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The `intercept_` member is not converted.
Returns:
self
Fitted estimator.
#### Notes
For non-sparse models, i.e. when there are not many zeros in `coef_`, this may actually *increase* memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with `(coef_ == 0).sum()`, must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial\_fit method (if any) will not work until you call densify.
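A hedged sketch of the sparsify/densify round trip on an L1-penalized model (the dataset and parameters are illustrative only):
```
>>> from scipy import sparse
>>> from sklearn.svm import LinearSVC
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
...                            random_state=0)
>>> clf = LinearSVC(penalty="l1", dual=False, random_state=0).fit(X, y)
>>> _ = clf.sparsify()   # coef_ becomes a scipy.sparse matrix
>>> sparse.issparse(clf.coef_)
True
>>> _ = clf.densify()    # convert back to a dense ndarray
```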
Examples using `sklearn.svm.LinearSVC`
--------------------------------------
[Release Highlights for scikit-learn 0.22](../../auto_examples/release_highlights/plot_release_highlights_0_22_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-22-0-py)
[Comparison of Calibration of Classifiers](../../auto_examples/calibration/plot_compare_calibration#sphx-glr-auto-examples-calibration-plot-compare-calibration-py)
[Probability Calibration curves](../../auto_examples/calibration/plot_calibration_curve#sphx-glr-auto-examples-calibration-plot-calibration-curve-py)
[Pipeline ANOVA SVM](../../auto_examples/feature_selection/plot_feature_selection_pipeline#sphx-glr-auto-examples-feature-selection-plot-feature-selection-pipeline-py)
[Univariate Feature Selection](../../auto_examples/feature_selection/plot_feature_selection#sphx-glr-auto-examples-feature-selection-plot-feature-selection-py)
[Scalable learning with polynomial kernel approximation](../../auto_examples/kernel_approximation/plot_scalable_poly_kernels#sphx-glr-auto-examples-kernel-approximation-plot-scalable-poly-kernels-py)
[Explicit feature map approximation for RBF kernels](../../auto_examples/miscellaneous/plot_kernel_approximation#sphx-glr-auto-examples-miscellaneous-plot-kernel-approximation-py)
[Balance model complexity and cross-validated score](../../auto_examples/model_selection/plot_grid_search_refit_callable#sphx-glr-auto-examples-model-selection-plot-grid-search-refit-callable-py)
[Detection error tradeoff (DET) curve](../../auto_examples/model_selection/plot_det#sphx-glr-auto-examples-model-selection-plot-det-py)
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
[Column Transformer with Heterogeneous Data Sources](../../auto_examples/compose/plot_column_transformer#sphx-glr-auto-examples-compose-plot-column-transformer-py)
[Selecting dimensionality reduction with Pipeline and GridSearchCV](../../auto_examples/compose/plot_compare_reduction#sphx-glr-auto-examples-compose-plot-compare-reduction-py)
[Feature discretization](../../auto_examples/preprocessing/plot_discretization_classification#sphx-glr-auto-examples-preprocessing-plot-discretization-classification-py)
[Plot different SVM classifiers in the iris dataset](../../auto_examples/svm/plot_iris_svc#sphx-glr-auto-examples-svm-plot-iris-svc-py)
[Plot the support vectors in LinearSVC](../../auto_examples/svm/plot_linearsvc_support_vectors#sphx-glr-auto-examples-svm-plot-linearsvc-support-vectors-py)
[Scaling the regularization parameter for SVCs](../../auto_examples/svm/plot_svm_scale_c#sphx-glr-auto-examples-svm-plot-svm-scale-c-py)
[Classification of text documents using sparse features](../../auto_examples/text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
scikit_learn sklearn.model_selection.cross_val_score sklearn.model\_selection.cross\_val\_score
==========================================
sklearn.model\_selection.cross\_val\_score(*estimator*, *X*, *y=None*, *\**, *groups=None*, *scoring=None*, *cv=None*, *n\_jobs=None*, *verbose=0*, *fit\_params=None*, *pre\_dispatch='2\*n\_jobs'*, *error\_score=nan*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_validation.py#L381)
Evaluate a score by cross-validation.
Read more in the [User Guide](../cross_validation#cross-validation).
Parameters:
**estimator**estimator object implementing ‘fit’
The object to use to fit the data.
**X**array-like of shape (n\_samples, n\_features)
The data to fit. Can be, for example, a list or an array.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
The target variable to try to predict in the case of supervised learning.
**groups**array-like of shape (n\_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” [cv](https://scikit-learn.org/1.1/glossary.html#term-cv) instance (e.g., [`GroupKFold`](sklearn.model_selection.groupkfold#sklearn.model_selection.GroupKFold "sklearn.model_selection.GroupKFold")).
**scoring**str or callable, default=None
A str (see model evaluation documentation) or a scorer callable object / function with signature `scorer(estimator, X, y)` which should return only a single value.
Similar to [`cross_validate`](sklearn.model_selection.cross_validate#sklearn.model_selection.cross_validate "sklearn.model_selection.cross_validate") but only a single metric is permitted.
If `None`, the estimator’s default scorer (if available) is used.
**cv**int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* `None`, to use the default 5-fold cross validation,
* int, to specify the number of folds in a `(Stratified)KFold`,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* An iterable that generates (train, test) splits as arrays of indices.
For `int`/`None` inputs, if the estimator is a classifier and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used. These splitters are instantiated with `shuffle=False` so the splits will be the same across calls.
Refer to the [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value if `None` changed from 3-fold to 5-fold.
**n\_jobs**int, default=None
Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the cross-validation splits. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**verbose**int, default=0
The verbosity level.
**fit\_params**dict, default=None
Parameters to pass to the fit method of the estimator.
**pre\_dispatch**int or str, default=’2\*n\_jobs’
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
* `None`, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
* An int, giving the exact number of total jobs that are spawned
* A str, giving an expression as a function of n\_jobs, as in ‘2\*n\_jobs’
**error\_score**‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised.
New in version 0.20.
Returns:
**scores**ndarray of float of shape=(len(list(cv)),)
Array of scores of the estimator for each run of the cross validation.
See also
[`cross_validate`](sklearn.model_selection.cross_validate#sklearn.model_selection.cross_validate "sklearn.model_selection.cross_validate")
To run cross-validation on multiple metrics and also to return train scores, fit times and score times.
[`cross_val_predict`](sklearn.model_selection.cross_val_predict#sklearn.model_selection.cross_val_predict "sklearn.model_selection.cross_val_predict")
Get predictions from each split of cross-validation for diagnostic purposes.
[`sklearn.metrics.make_scorer`](sklearn.metrics.make_scorer#sklearn.metrics.make_scorer "sklearn.metrics.make_scorer")
Make a scorer from a performance metric or loss function.
#### Examples
```
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> print(cross_val_score(lasso, X, y, cv=3))
[0.3315057 0.08022103 0.03531816]
```
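A different metric can be requested through `scoring`; a sketch using the built-in `"neg_mean_squared_error"` scorer name on the same data:
```
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> diabetes = datasets.load_diabetes()
>>> X, y = diabetes.data[:150], diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> scores = cross_val_score(lasso, X, y, cv=3, scoring="neg_mean_squared_error")
>>> scores.shape  # one (negated) MSE per fold
(3,)
```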
Examples using `sklearn.model_selection.cross_val_score`
--------------------------------------------------------
[Model selection with Probabilistic PCA and Factor Analysis (FA)](../../auto_examples/decomposition/plot_pca_vs_fa_model_selection#sphx-glr-auto-examples-decomposition-plot-pca-vs-fa-model-selection-py)
[Imputing missing values before building an estimator](../../auto_examples/impute/plot_missing_values#sphx-glr-auto-examples-impute-plot-missing-values-py)
[Imputing missing values with variants of IterativeImputer](../../auto_examples/impute/plot_iterative_imputer_variants_comparison#sphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)
[Nested versus non-nested cross-validation](../../auto_examples/model_selection/plot_nested_cross_validation_iris#sphx-glr-auto-examples-model-selection-plot-nested-cross-validation-iris-py)
[Receiver Operating Characteristic (ROC) with cross validation](../../auto_examples/model_selection/plot_roc_crossval#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py)
[Underfitting vs. Overfitting](../../auto_examples/model_selection/plot_underfitting_overfitting#sphx-glr-auto-examples-model-selection-plot-underfitting-overfitting-py)
[SVM-Anova: SVM with univariate feature selection](../../auto_examples/svm/plot_svm_anova#sphx-glr-auto-examples-svm-plot-svm-anova-py)
[Cross-validation on Digits Dataset Exercise](../../auto_examples/exercises/plot_cv_digits#sphx-glr-auto-examples-exercises-plot-cv-digits-py)
scikit_learn sklearn.metrics.average_precision_score sklearn.metrics.average\_precision\_score
=========================================
sklearn.metrics.average\_precision\_score(*y\_true*, *y\_score*, *\**, *average='macro'*, *pos\_label=1*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L112)
Compute average precision (AP) from prediction scores.
AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight:
\[\text{AP} = \sum\_n (R\_n - R\_{n-1}) P\_n\] where \(P\_n\) and \(R\_n\) are the precision and recall at the nth threshold [[1]](#rcdf8f32d7f9d-1). This implementation is not interpolated and is different from computing the area under the precision-recall curve with the trapezoidal rule, which uses linear interpolation and can be too optimistic.
Note: this implementation is restricted to the binary classification task or multilabel classification task.
Read more in the [User Guide](../model_evaluation#precision-recall-f-measure-metrics).
Parameters:
**y\_true**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
True binary labels or binary label indicators.
**y\_score**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by [decision\_function](https://scikit-learn.org/1.1/glossary.html#term-decision_function) on some classifiers).
**average**{‘micro’, ‘samples’, ‘weighted’, ‘macro’} or None, default=’macro’
If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
`'micro'`:
Calculate metrics globally by considering each element of the label indicator matrix as a label.
`'macro'`:
Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
`'weighted'`:
Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
`'samples'`:
Calculate metrics for each instance, and find their average.
Will be ignored when `y_true` is binary.
**pos\_label**int or str, default=1
The label of the positive class. Only applied to binary `y_true`. For multilabel-indicator `y_true`, `pos_label` is fixed to 1.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**average\_precision**float
Average precision score.
See also
[`roc_auc_score`](sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score "sklearn.metrics.roc_auc_score")
Compute the area under the ROC curve.
[`precision_recall_curve`](sklearn.metrics.precision_recall_curve#sklearn.metrics.precision_recall_curve "sklearn.metrics.precision_recall_curve")
Compute precision-recall pairs for different probability thresholds.
#### Notes
Changed in version 0.19: Instead of linearly interpolating between operating points, precisions are weighted by the change in recall since the last operating point.
#### References
[[1](#id1)] [Wikipedia entry for the Average precision](https://en.wikipedia.org/w/index.php?title=Information_retrieval&oldid=793358396#Average_precision)
#### Examples
```
>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> average_precision_score(y_true, y_scores)
0.83...
```
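For a multilabel task, `average=None` returns one score per label; a sketch with made-up scores:
```
>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([[1, 0, 0], [0, 1, 1], [1, 1, 0]])
>>> y_score = np.array([[0.9, 0.2, 0.1], [0.2, 0.8, 0.7], [0.7, 0.6, 0.3]])
>>> average_precision_score(y_true, y_score, average=None).shape  # one AP per label
(3,)
```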
Examples using `sklearn.metrics.average_precision_score`
--------------------------------------------------------
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
scikit_learn sklearn.utils.gen_batches sklearn.utils.gen\_batches
==========================
sklearn.utils.gen\_batches(*n*, *batch\_size*, *\**, *min\_batch\_size=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/__init__.py#L695)
Generator to create slices containing batch\_size elements, from 0 to n.
The last slice may contain fewer than batch\_size elements, when batch\_size does not divide n.
Parameters:
**n**int
Total number of elements (slices are generated from 0 to n).
**batch\_size**int
Number of elements in each batch.
**min\_batch\_size**int, default=0
Minimum batch size to produce.
Yields:
slice of batch\_size elements
See also
[`gen_even_slices`](sklearn.utils.gen_even_slices#sklearn.utils.gen_even_slices "sklearn.utils.gen_even_slices")
Generator to create n\_packs slices going up to n.
#### Examples
```
>>> from sklearn.utils import gen_batches
>>> list(gen_batches(7, 3))
[slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)]
>>> list(gen_batches(6, 3))
[slice(0, 3, None), slice(3, 6, None)]
>>> list(gen_batches(2, 3))
[slice(0, 2, None)]
>>> list(gen_batches(7, 3, min_batch_size=0))
[slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)]
>>> list(gen_batches(7, 3, min_batch_size=2))
[slice(0, 3, None), slice(3, 7, None)]
```
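A typical use is slicing an array into minibatches; a short sketch:
```
>>> import numpy as np
>>> from sklearn.utils import gen_batches
>>> X = np.arange(10).reshape(5, 2)
>>> [X[batch].shape for batch in gen_batches(X.shape[0], 2)]
[(2, 2), (2, 2), (1, 2)]
```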
scikit_learn sklearn.ensemble.GradientBoostingRegressor sklearn.ensemble.GradientBoostingRegressor
==========================================
*class*sklearn.ensemble.GradientBoostingRegressor(*\**, *loss='squared\_error'*, *learning\_rate=0.1*, *n\_estimators=100*, *subsample=1.0*, *criterion='friedman\_mse'*, *min\_samples\_split=2*, *min\_samples\_leaf=1*, *min\_weight\_fraction\_leaf=0.0*, *max\_depth=3*, *min\_impurity\_decrease=0.0*, *init=None*, *random\_state=None*, *max\_features=None*, *alpha=0.9*, *verbose=0*, *max\_leaf\_nodes=None*, *warm\_start=False*, *validation\_fraction=0.1*, *n\_iter\_no\_change=None*, *tol=0.0001*, *ccp\_alpha=0.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_gb.py#L1562)
Gradient Boosting for regression.
This estimator builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the given loss function.
[`sklearn.ensemble.HistGradientBoostingRegressor`](sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor") is a much faster variant of this algorithm for intermediate datasets (`n_samples >= 10_000`).
Read more in the [User Guide](../ensemble#gradient-boosting).
Parameters:
**loss**{‘squared\_error’, ‘absolute\_error’, ‘huber’, ‘quantile’}, default=’squared\_error’
Loss function to be optimized. ‘squared\_error’ refers to the squared error for regression. ‘absolute\_error’ refers to the absolute error of regression and is a robust loss function. ‘huber’ is a combination of the two. ‘quantile’ allows quantile regression (use `alpha` to specify the quantile).
Deprecated since version 1.0: The loss ‘ls’ was deprecated in v1.0 and will be removed in version 1.2. Use `loss='squared_error'` which is equivalent.
Deprecated since version 1.0: The loss ‘lad’ was deprecated in v1.0 and will be removed in version 1.2. Use `loss='absolute_error'` which is equivalent.
**learning\_rate**float, default=0.1
Learning rate shrinks the contribution of each tree by `learning_rate`. There is a trade-off between learning\_rate and n\_estimators. Values must be in the range `(0.0, inf)`.
**n\_estimators**int, default=100
The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance. Values must be in the range `[1, inf)`.
**subsample**float, default=1.0
The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. `subsample` interacts with the parameter `n_estimators`. Choosing `subsample < 1.0` leads to a reduction of variance and an increase in bias. Values must be in the range `(0.0, 1.0]`.
**criterion**{‘friedman\_mse’, ‘squared\_error’, ‘mse’}, default=’friedman\_mse’
The function to measure the quality of a split. Supported criteria are “friedman\_mse” for the mean squared error with improvement score by Friedman, “squared\_error” for mean squared error. The default value of “friedman\_mse” is generally the best as it can provide a better approximation in some cases.
New in version 0.18.
Deprecated since version 1.0: Criterion ‘mse’ was deprecated in v1.0 and will be removed in version 1.2. Use `criterion='squared_error'` which is equivalent.
**min\_samples\_split**int or float, default=2
The minimum number of samples required to split an internal node:
* If int, values must be in the range `[2, inf)`.
* If float, values must be in the range `(0.0, 1.0]` and `min_samples_split` will be `ceil(min_samples_split * n_samples)`.
Changed in version 0.18: Added float values for fractions.
**min\_samples\_leaf**int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least `min_samples_leaf` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
* If int, values must be in the range `[1, inf)`.
* If float, values must be in the range `(0.0, 1.0]` and `min_samples_leaf` will be `ceil(min_samples_leaf * n_samples)`.
Changed in version 0.18: Added float values for fractions.
**min\_weight\_fraction\_leaf**float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample\_weight is not provided. Values must be in the range `[0.0, 0.5]`.
**max\_depth**int, default=3
Maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables. Values must be in the range `[1, inf)`.
**min\_impurity\_decrease**float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value. Values must be in the range `[0.0, inf)`.
The weighted impurity decrease equation is the following:
```
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
```
where `N` is the total number of samples, `N_t` is the number of samples at the current node, `N_t_L` is the number of samples in the left child, and `N_t_R` is the number of samples in the right child.
`N`, `N_t`, `N_t_R` and `N_t_L` all refer to the weighted sum, if `sample_weight` is passed.
New in version 0.19.
**init**estimator or ‘zero’, default=None
An estimator object that is used to compute the initial predictions. `init` has to provide [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) and [predict](https://scikit-learn.org/1.1/glossary.html#term-predict). If ‘zero’, the initial raw predictions are set to zero. By default a `DummyEstimator` is used, predicting either the average target value (for loss=’squared\_error’), or a quantile for the other losses.
**random\_state**int, RandomState instance or None, default=None
Controls the random seed given to each Tree estimator at each boosting iteration. In addition, it controls the random permutation of the features at each split (see Notes for more details). It also controls the random splitting of the training data to obtain a validation set if `n_iter_no_change` is not None. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**max\_features**{‘auto’, ‘sqrt’, ‘log2’}, int or float, default=None
The number of features to consider when looking for the best split:
* If int, values must be in the range `[1, inf)`.
* If float, values must be in the range `(0.0, 1.0]` and the features considered at each split will be `max(1, int(max_features * n_features_in_))`.
* If “auto”, then `max_features=n_features`.
* If “sqrt”, then `max_features=sqrt(n_features)`.
* If “log2”, then `max_features=log2(n_features)`.
* If None, then `max_features=n_features`.
Choosing `max_features < n_features` leads to a reduction of variance and an increase in bias.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than `max_features` features.
**alpha**float, default=0.9
The alpha-quantile of the huber loss function and the quantile loss function. Only if `loss='huber'` or `loss='quantile'`. Values must be in the range `(0.0, 1.0)`.
**verbose**int, default=0
Enable verbose output. If 1 then it prints progress and performance once in a while (the more trees the lower the frequency). If greater than 1 then it prints progress and performance for every tree. Values must be in the range `[0, inf)`.
**max\_leaf\_nodes**int, default=None
Grow trees with `max_leaf_nodes` in best-first fashion. Best nodes are defined as relative reduction in impurity. Values must be in the range `[2, inf)`. If None, then unlimited number of leaf nodes.
**warm\_start**bool, default=False
When set to `True`, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just erase the previous solution. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
**validation\_fraction**float, default=0.1
The proportion of training data to set aside as validation set for early stopping. Values must be in the range `(0.0, 1.0)`. Only used if `n_iter_no_change` is set to an integer.
New in version 0.20.
**n\_iter\_no\_change**int, default=None
`n_iter_no_change` is used to decide if early stopping will be used to terminate training when validation score is not improving. By default it is set to None to disable early stopping. If set to a number, it will set aside `validation_fraction` size of the training data as validation and terminate training when validation score is not improving in all of the previous `n_iter_no_change` numbers of iterations. Values must be in the range `[1, inf)`.
New in version 0.20.
**tol**float, default=1e-4
Tolerance for the early stopping. When the loss is not improving by at least tol for `n_iter_no_change` iterations (if set to a number), the training stops. Values must be in the range `(0.0, inf)`.
New in version 0.20.
**ccp\_alpha**non-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than `ccp_alpha` will be chosen. By default, no pruning is performed. Values must be in the range `[0.0, inf)`. See [Minimal Cost-Complexity Pruning](../tree#minimal-cost-complexity-pruning) for details.
New in version 0.22.
Attributes:
[`feature_importances_`](#sklearn.ensemble.GradientBoostingRegressor.feature_importances_ "sklearn.ensemble.GradientBoostingRegressor.feature_importances_")ndarray of shape (n\_features,)
The impurity-based feature importances.
**oob\_improvement\_**ndarray of shape (n\_estimators,)
The improvement in loss (= deviance) on the out-of-bag samples relative to the previous iteration. `oob_improvement_[0]` is the improvement in loss of the first stage over the `init` estimator. Only available if `subsample < 1.0`
**train\_score\_**ndarray of shape (n\_estimators,)
The i-th score `train_score_[i]` is the deviance (= loss) of the model at iteration `i` on the in-bag sample. If `subsample == 1` this is the deviance on the training data.
[`loss_`](#sklearn.ensemble.GradientBoostingRegressor.loss_ "sklearn.ensemble.GradientBoostingRegressor.loss_")LossFunction
DEPRECATED: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
**init\_**estimator
The estimator that provides the initial predictions. Set via the `init` argument or `loss.init_estimator`.
**estimators\_**ndarray of DecisionTreeRegressor of shape (n\_estimators, 1)
The collection of fitted sub-estimators.
**n\_estimators\_**int
The number of estimators as selected by early stopping (if `n_iter_no_change` is specified). Otherwise it is set to `n_estimators`.
[`n_features_`](#sklearn.ensemble.GradientBoostingRegressor.n_features_ "sklearn.ensemble.GradientBoostingRegressor.n_features_")int
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**max\_features\_**int
The inferred value of max\_features.
See also
[`HistGradientBoostingRegressor`](sklearn.ensemble.histgradientboostingregressor#sklearn.ensemble.HistGradientBoostingRegressor "sklearn.ensemble.HistGradientBoostingRegressor")
Histogram-based Gradient Boosting Regression Tree.
[`sklearn.tree.DecisionTreeRegressor`](sklearn.tree.decisiontreeregressor#sklearn.tree.DecisionTreeRegressor "sklearn.tree.DecisionTreeRegressor")
A decision tree regressor.
[`sklearn.ensemble.RandomForestRegressor`](sklearn.ensemble.randomforestregressor#sklearn.ensemble.RandomForestRegressor "sklearn.ensemble.RandomForestRegressor")
A random forest regressor.
#### Notes
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and `max_features=n_features`, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, `random_state` has to be fixed.
#### References
J. Friedman, Greedy Function Approximation: A Gradient Boosting Machine, The Annals of Statistics, Vol. 29, No. 5, 2001.
J. Friedman, Stochastic Gradient Boosting, 1999.
T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical Learning Ed. 2, Springer, 2009.
#### Examples
```
>>> from sklearn.datasets import make_regression
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.model_selection import train_test_split
>>> X, y = make_regression(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> reg = GradientBoostingRegressor(random_state=0)
>>> reg.fit(X_train, y_train)
GradientBoostingRegressor(random_state=0)
>>> reg.predict(X_test[1:2])
array([-61...])
>>> reg.score(X_test, y_test)
0.4...
```
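Early stopping is enabled by setting `n_iter_no_change`; the fitted `n_estimators_` attribute then reports how many boosting stages were actually built. A minimal sketch continuing the example above (the exact number of fitted stages depends on the data):
```
>>> reg_es = GradientBoostingRegressor(n_estimators=500, n_iter_no_change=5,
...                                    validation_fraction=0.1,
...                                    random_state=0).fit(X_train, y_train)
>>> reg_es.n_estimators_ <= 500
True
```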
#### Methods
| | |
| --- | --- |
| [`apply`](#sklearn.ensemble.GradientBoostingRegressor.apply "sklearn.ensemble.GradientBoostingRegressor.apply")(X) | Apply trees in the ensemble to X, return leaf indices. |
| [`fit`](#sklearn.ensemble.GradientBoostingRegressor.fit "sklearn.ensemble.GradientBoostingRegressor.fit")(X, y[, sample\_weight, monitor]) | Fit the gradient boosting model. |
| [`get_params`](#sklearn.ensemble.GradientBoostingRegressor.get_params "sklearn.ensemble.GradientBoostingRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.ensemble.GradientBoostingRegressor.predict "sklearn.ensemble.GradientBoostingRegressor.predict")(X) | Predict regression target for X. |
| [`score`](#sklearn.ensemble.GradientBoostingRegressor.score "sklearn.ensemble.GradientBoostingRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.ensemble.GradientBoostingRegressor.set_params "sklearn.ensemble.GradientBoostingRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`staged_predict`](#sklearn.ensemble.GradientBoostingRegressor.staged_predict "sklearn.ensemble.GradientBoostingRegressor.staged_predict")(X) | Predict regression target at each stage for X. |
apply(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_gb.py#L1990)
Apply trees in the ensemble to X, return leaf indices.
New in version 0.17.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, its dtype will be converted to `dtype=np.float32`. If a sparse matrix is provided, it will be converted to a sparse `csr_matrix`.
Returns:
**X\_leaves**array-like of shape (n\_samples, n\_estimators)
For each datapoint x in X and for each tree in the ensemble, return the index of the leaf x ends up in each estimator.
*property*feature\_importances\_
The impurity-based feature importances.
The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See [`sklearn.inspection.permutation_importance`](sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance") as an alternative.
Returns:
**feature\_importances\_**ndarray of shape (n\_features,)
The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
fit(*X*, *y*, *sample\_weight=None*, *monitor=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_gb.py#L495)
Fit the gradient boosting model.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
**y**array-like of shape (n\_samples,)
Target values (strings or integers in classification, real numbers in regression) For classification, labels must correspond to classes.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.
**monitor**callable, default=None
The monitor is called after each iteration with the current iteration, a reference to the estimator and the local variables of `_fit_stages` as keyword arguments `callable(i, self, locals())`. If the callable returns `True` the fitting procedure is stopped. The monitor can be used for various things such as computing held-out estimates, early stopping, model introspection, and snapshotting; a sketch is given after this method's description.
Returns:
**self**object
Fitted estimator.
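A minimal sketch of the `monitor` hook described above: the callable receives the zero-based stage index, the estimator and the local variables of the fitting loop, and returning `True` stops training early. The stopping rule below is purely illustrative:
```
>>> from sklearn.datasets import make_regression
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_regression(random_state=0)
>>> def stop_after_20_stages(i, estimator, local_vars):
...     # illustrative criterion: stop once 20 stages have been fitted
...     return i >= 19
>>> reg = GradientBoostingRegressor(n_estimators=100, random_state=0)
>>> reg = reg.fit(X, y, monitor=stop_after_20_stages)
>>> reg.n_estimators_ < 100
True
```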
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*loss\_
DEPRECATED: Attribute `loss_` was deprecated in version 1.1 and will be removed in 1.3.
*property*n\_features\_
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2. Use `n_features_in_` instead.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_gb.py#L1948)
Predict regression target for X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
Returns:
**y**ndarray of shape (n\_samples,)
The predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
staged\_predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/ensemble/_gb.py#L1969)
Predict regression target at each stage for X.
This method allows monitoring (i.e. determining the error on a test set) after each stage, as in the sketch below.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
Yields:
**y**generator of ndarray of shape (n\_samples,)
The predicted value of the input samples.
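A minimal sketch of such monitoring, tracking the held-out squared error after each boosting stage (the variable names are illustrative):
```
>>> from sklearn.datasets import make_regression
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.metrics import mean_squared_error
>>> from sklearn.model_selection import train_test_split
>>> X, y = make_regression(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> reg = GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
>>> test_errors = [mean_squared_error(y_test, y_pred)
...                for y_pred in reg.staged_predict(X_test)]
>>> len(test_errors)  # one error value per boosting stage
50
```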
Examples using `sklearn.ensemble.GradientBoostingRegressor`
-----------------------------------------------------------
[Gradient Boosting regression](../../auto_examples/ensemble/plot_gradient_boosting_regression#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-regression-py)
[Plot individual and voting regression predictions](../../auto_examples/ensemble/plot_voting_regressor#sphx-glr-auto-examples-ensemble-plot-voting-regressor-py)
[Prediction Intervals for Gradient Boosting Regression](../../auto_examples/ensemble/plot_gradient_boosting_quantile#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-quantile-py)
[Model Complexity Influence](../../auto_examples/applications/plot_model_complexity_influence#sphx-glr-auto-examples-applications-plot-model-complexity-influence-py)
scikit_learn sklearn.cluster.OPTICS sklearn.cluster.OPTICS
======================
*class*sklearn.cluster.OPTICS(*\**, *min\_samples=5*, *max\_eps=inf*, *metric='minkowski'*, *p=2*, *metric\_params=None*, *cluster\_method='xi'*, *eps=None*, *xi=0.05*, *predecessor\_correction=True*, *min\_cluster\_size=None*, *algorithm='auto'*, *leaf\_size=30*, *memory=None*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_optics.py#L25)
Estimate clustering structure from vector array.
OPTICS (Ordering Points To Identify the Clustering Structure), closely related to DBSCAN, finds core samples of high density and expands clusters from them [[1]](#r2c55e37003fe-1). Unlike DBSCAN, it keeps the cluster hierarchy for a variable neighborhood radius. Better suited for usage on large datasets than the current sklearn implementation of DBSCAN.
Clusters are then extracted using a DBSCAN-like method (cluster\_method = ‘dbscan’) or an automatic technique proposed in [[1]](#r2c55e37003fe-1) (cluster\_method = ‘xi’).
This implementation deviates from the original OPTICS by first performing k-nearest-neighborhood searches on all points to identify core sizes, then computing only the distances to unprocessed points when constructing the cluster order. Note that we do not employ a heap to manage the expansion candidates, so the time complexity will be O(n^2).
Read more in the [User Guide](../clustering#optics).
Parameters:
**min\_samples**int > 1 or float between 0 and 1, default=5
The number of samples in a neighborhood for a point to be considered as a core point. Also, up and down steep regions can’t have more than `min_samples` consecutive non-steep points. Expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2).
**max\_eps**float, default=np.inf
The maximum distance between two samples for one to be considered as in the neighborhood of the other. Default value of `np.inf` will identify clusters across all scales; reducing `max_eps` will result in shorter run times.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used.
If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. If metric is “precomputed”, `X` is assumed to be a distance matrix and must be square.
Valid values for metric are:
* from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’]
* from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’]
See the documentation for scipy.spatial.distance for details on these metrics.
**p**int, default=2
Parameter for the Minkowski metric from [`pairwise_distances`](sklearn.metrics.pairwise_distances#sklearn.metrics.pairwise_distances "sklearn.metrics.pairwise_distances"). When p = 1, this is equivalent to using manhattan\_distance (l1), and euclidean\_distance (l2) for p = 2. For arbitrary p, minkowski\_distance (l\_p) is used.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**cluster\_method**str, default=’xi’
The extraction method used to extract clusters using the calculated reachability and ordering. Possible values are “xi” and “dbscan”.
**eps**float, default=None
The maximum distance between two samples for one to be considered as in the neighborhood of the other. By default it assumes the same value as `max_eps`. Used only when `cluster_method='dbscan'`.
**xi**float between 0 and 1, default=0.05
Determines the minimum steepness on the reachability plot that constitutes a cluster boundary. For example, an upwards point in the reachability plot is defined by the ratio from one point to its successor being at most 1-xi. Used only when `cluster_method='xi'`.
**predecessor\_correction**bool, default=True
Correct clusters according to the predecessors calculated by OPTICS [[2]](#r2c55e37003fe-2). This parameter has minimal effect on most datasets. Used only when `cluster_method='xi'`.
**min\_cluster\_size**int > 1 or float between 0 and 1, default=None
Minimum number of samples in an OPTICS cluster, expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2). If `None`, the value of `min_samples` is used instead. Used only when `cluster_method='xi'`.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use `BallTree`.
* ‘kd\_tree’ will use `KDTree`.
* ‘brute’ will use a brute-force search.
* ‘auto’ (default) will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.cluster.OPTICS.fit "sklearn.cluster.OPTICS.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to `BallTree` or `KDTree`. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**memory**str or object with the joblib.Memory interface, default=None
Used to cache the output of the computation of the tree. By default, no caching is done. If a string is given, it is the path to the caching directory.
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Attributes:
**labels\_**ndarray of shape (n\_samples,)
Cluster labels for each point in the dataset given to fit(). Noisy samples and points which are not included in a leaf cluster of `cluster_hierarchy_` are labeled as -1.
**reachability\_**ndarray of shape (n\_samples,)
Reachability distances per sample, indexed by object order. Use `clust.reachability_[clust.ordering_]` to access in cluster order.
**ordering\_**ndarray of shape (n\_samples,)
The cluster ordered list of sample indices.
**core\_distances\_**ndarray of shape (n\_samples,)
Distance at which each sample becomes a core point, indexed by object order. Points which will never be core have a distance of inf. Use `clust.core_distances_[clust.ordering_]` to access in cluster order.
**predecessor\_**ndarray of shape (n\_samples,)
Point that a sample was reached from, indexed by object order. Seed points have a predecessor of -1.
**cluster\_hierarchy\_**ndarray of shape (n\_clusters, 2)
The list of clusters in the form of `[start, end]` in each row, with all indices inclusive. The clusters are ordered according to `(end, -start)` (ascending) so that larger clusters encompassing smaller clusters come after those smaller ones. Since `labels_` does not reflect the hierarchy, usually `len(cluster_hierarchy_) > np.unique(optics.labels_)`. Please also note that these indices are of the `ordering_`, i.e. `X[ordering_][start:end + 1]` form a cluster. Only available when `cluster_method='xi'`.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`DBSCAN`](sklearn.cluster.dbscan#sklearn.cluster.DBSCAN "sklearn.cluster.DBSCAN")
A similar clustering for a specified neighborhood radius (eps). Our implementation is optimized for runtime.
#### References
[1] ([1](#id1),[2](#id2)) Ankerst, Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg Sander. “OPTICS: ordering points to identify the clustering structure.” ACM SIGMOD Record 28, no. 2 (1999): 49-60.
[[2](#id3)] Schubert, Erich, Michael Gertz. “Improving the Cluster Structure Extracted from OPTICS Plots.” Proc. of the Conference “Lernen, Wissen, Daten, Analysen” (LWDA) (2018): 318-329.
#### Examples
```
>>> from sklearn.cluster import OPTICS
>>> import numpy as np
>>> X = np.array([[1, 2], [2, 5], [3, 6],
... [8, 7], [8, 8], [7, 3]])
>>> clustering = OPTICS(min_samples=2).fit(X)
>>> clustering.labels_
array([0, 0, 0, 1, 1, 1])
```
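The `reachability_` and `ordering_` attributes can be combined to inspect the reachability plot, or to re-extract DBSCAN-like labels at a different `eps` without refitting via `cluster_optics_dbscan`. A minimal sketch continuing the example above (the `eps` value is illustrative):
```
>>> from sklearn.cluster import cluster_optics_dbscan
>>> reachability_in_order = clustering.reachability_[clustering.ordering_]
>>> reachability_in_order.shape
(6,)
>>> labels_at_eps = cluster_optics_dbscan(reachability=clustering.reachability_,
...                                       core_distances=clustering.core_distances_,
...                                       ordering=clustering.ordering_, eps=2)
>>> labels_at_eps.shape
(6,)
```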
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cluster.OPTICS.fit "sklearn.cluster.OPTICS.fit")(X[, y]) | Perform OPTICS clustering. |
| [`fit_predict`](#sklearn.cluster.OPTICS.fit_predict "sklearn.cluster.OPTICS.fit_predict")(X[, y]) | Perform clustering on `X` and returns cluster labels. |
| [`get_params`](#sklearn.cluster.OPTICS.get_params "sklearn.cluster.OPTICS.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.cluster.OPTICS.set_params "sklearn.cluster.OPTICS.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_optics.py#L257)
Perform OPTICS clustering.
Extracts an ordered list of points and reachability distances, and performs initial clustering using `max_eps` distance specified at OPTICS object instantiation.
Parameters:
**X**ndarray of shape (n\_samples, n\_features), or (n\_samples, n\_samples) if metric=’precomputed’
A feature array, or array of distances between samples if metric=’precomputed’.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns a fitted instance of self.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L732)
Perform clustering on `X` and returns cluster labels.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**labels**ndarray of shape (n\_samples,), dtype=np.int64
Cluster labels.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.cluster.OPTICS`
---------------------------------------
[Comparing different clustering algorithms on toy datasets](../../auto_examples/cluster/plot_cluster_comparison#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py)
[Demo of OPTICS clustering algorithm](../../auto_examples/cluster/plot_optics#sphx-glr-auto-examples-cluster-plot-optics-py)
scikit_learn sklearn.utils.extmath.fast_logdet sklearn.utils.extmath.fast\_logdet
==================================
sklearn.utils.extmath.fast\_logdet(*A*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/utils/extmath.py#L82)
Compute log(det(A)) for A symmetric.
Equivalent to `np.log(np.linalg.det(A))` but more robust. It returns -inf if det(A) is non-positive or not defined.
Parameters:
**A**array-like
The matrix.
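#### Examples
A minimal sketch of the behaviour described above:
```
>>> import numpy as np
>>> from sklearn.utils.extmath import fast_logdet
>>> A = np.diag([2.0, 3.0])
>>> fast_logdet(A)  # log(det(A)) = log(6)
1.79...
>>> fast_logdet(np.array([[1.0, 2.0], [2.0, 1.0]]))  # det(A) is negative
-inf
```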
scikit_learn sklearn.random_projection.SparseRandomProjection sklearn.random\_projection.SparseRandomProjection
=================================================
*class*sklearn.random\_projection.SparseRandomProjection(*n\_components='auto'*, *\**, *density='auto'*, *eps=0.1*, *dense\_output=False*, *compute\_inverse\_components=False*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/random_projection.py#L594)
Reduce dimensionality through sparse random projection.
Sparse random matrix is an alternative to dense random projection matrix that guarantees similar embedding quality while being much more memory efficient and allowing faster computation of the projected data.
If we note `s = 1 / density` the components of the random matrix are drawn from:
* -sqrt(s) / sqrt(n\_components) with probability 1 / 2s
* 0 with probability 1 - 1 / s
* +sqrt(s) / sqrt(n\_components) with probability 1 / 2s
Read more in the [User Guide](../random_projection#sparse-random-matrix).
New in version 0.13.
Parameters:
**n\_components**int or ‘auto’, default=’auto’
Dimensionality of the target projection space.
n\_components can be automatically adjusted according to the number of samples in the dataset and the bound given by the Johnson-Lindenstrauss lemma. In that case the quality of the embedding is controlled by the `eps` parameter.
It should be noted that the Johnson-Lindenstrauss lemma can yield very conservative estimates of the required number of components as it makes no assumption on the structure of the dataset.
**density**float or ‘auto’, default=’auto’
Ratio in the range (0, 1] of non-zero components in the random projection matrix.
If density = ‘auto’, the value is set to the minimum density as recommended by Ping Li et al.: 1 / sqrt(n\_features).
Use density = 1 / 3.0 if you want to reproduce the results from Achlioptas, 2001.
**eps**float, default=0.1
Parameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n\_components is set to ‘auto’. This value should be strictly positive.
Smaller values lead to better embedding and higher number of dimensions (n\_components) in the target projection space.
**dense\_output**bool, default=False
If True, ensure that the output of the random projection is a dense numpy array even if the input and random projection matrix are both sparse. In practice, if the number of components is small the number of zero components in the projected data will be very small and it will be more CPU and memory efficient to use a dense representation.
If False, the projected data uses a sparse representation if the input is sparse.
**compute\_inverse\_components**bool, default=False
Learn the inverse transform by computing the pseudo-inverse of the components during fit. Note that the pseudo-inverse is always a dense array, even if the training data was sparse. This means that it might be necessary to call `inverse_transform` on a small batch of samples at a time to avoid exhausting the available memory on the host. Moreover, computing the pseudo-inverse does not scale well to large matrices.
**random\_state**int, RandomState instance or None, default=None
Controls the pseudo random number generator used to generate the projection matrix at fit time. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**n\_components\_**int
Concrete number of components computed when n\_components=”auto”.
**components\_**sparse matrix of shape (n\_components, n\_features)
Random matrix used for the projection. Sparse matrix will be of CSR format.
**inverse\_components\_**ndarray of shape (n\_features, n\_components)
Pseudo-inverse of the components, only computed if `compute_inverse_components` is True.
New in version 1.1.
**density\_**float in range 0.0 - 1.0
Concrete density computed when density = “auto”.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`GaussianRandomProjection`](sklearn.random_projection.gaussianrandomprojection#sklearn.random_projection.GaussianRandomProjection "sklearn.random_projection.GaussianRandomProjection")
Reduce dimensionality through Gaussian random projection.
#### References
[1] Ping Li, T. Hastie and K. W. Church, 2006, “Very Sparse Random Projections”. <https://web.stanford.edu/~hastie/Papers/Ping/KDD06_rp.pdf>
[2] D. Achlioptas, 2001, “Database-friendly random projections”, <https://cgi.di.uoa.gr/~optas/papers/jl.pdf>
#### Examples
```
>>> import numpy as np
>>> from sklearn.random_projection import SparseRandomProjection
>>> rng = np.random.RandomState(42)
>>> X = rng.rand(25, 3000)
>>> transformer = SparseRandomProjection(random_state=rng)
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(25, 2759)
>>> # very few components are non-zero
>>> np.mean(transformer.components_ != 0)
0.0182...
```
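The automatically chosen dimensionality above follows the Johnson-Lindenstrauss bound; the helper `johnson_lindenstrauss_min_dim` makes that relationship explicit (a sketch continuing the example above):
```
>>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
>>> johnson_lindenstrauss_min_dim(n_samples=25, eps=0.1)
2759
```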
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.random_projection.SparseRandomProjection.fit "sklearn.random_projection.SparseRandomProjection.fit")(X[, y]) | Generate a sparse random projection matrix. |
| [`fit_transform`](#sklearn.random_projection.SparseRandomProjection.fit_transform "sklearn.random_projection.SparseRandomProjection.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.random_projection.SparseRandomProjection.get_feature_names_out "sklearn.random_projection.SparseRandomProjection.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.random_projection.SparseRandomProjection.get_params "sklearn.random_projection.SparseRandomProjection.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.random_projection.SparseRandomProjection.inverse_transform "sklearn.random_projection.SparseRandomProjection.inverse_transform")(X) | Project data back to its original space. |
| [`set_params`](#sklearn.random_projection.SparseRandomProjection.set_params "sklearn.random_projection.SparseRandomProjection.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.random_projection.SparseRandomProjection.transform "sklearn.random_projection.SparseRandomProjection.transform")(X) | Project the data by using matrix product with the random matrix. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/random_projection.py#L341)
Generate a sparse random projection matrix.
Parameters:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
Training set: only the shape is used to find optimal random matrix dimensions based on the theory referenced in the aforementioned papers.
**y**Ignored
Not used, present here for API consistency by convention.
Returns:
**self**object
BaseRandomProjection class instance.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray array of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.random_projection.SparseRandomProjection.fit "sklearn.random_projection.SparseRandomProjection.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/random_projection.py#L418)
Project data back to its original space.
Returns an array X\_original whose transform would be X. Note that even if X is sparse, X\_original is dense: this may use a lot of RAM.
If `compute_inverse_components` is False, the inverse of the components is computed during each call to `inverse_transform` which can be costly.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_components)
Data to be transformed back.
Returns:
**X\_original**ndarray of shape (n\_samples, n\_features)
Reconstructed data.
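A minimal sketch of the round trip, with the pseudo-inverse precomputed at fit time via `compute_inverse_components=True`:
```
>>> import numpy as np
>>> from sklearn.random_projection import SparseRandomProjection
>>> rng = np.random.RandomState(42)
>>> X = rng.rand(25, 3000)
>>> transformer = SparseRandomProjection(compute_inverse_components=True,
...                                      random_state=rng)
>>> X_new = transformer.fit_transform(X)
>>> X_original = transformer.inverse_transform(X_new)
>>> X_original.shape
(25, 3000)
```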
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/random_projection.py#L772)
Project the data by using matrix product with the random matrix.
Parameters:
**X**{ndarray, sparse matrix} of shape (n\_samples, n\_features)
The input data to project into a smaller dimensional space.
Returns:
**X\_new**{ndarray, sparse matrix} of shape (n\_samples, n\_components)
Projected array. It is a sparse matrix only when the input is sparse and `dense_output = False`.
Examples using `sklearn.random_projection.SparseRandomProjection`
-----------------------------------------------------------------
[Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](../../auto_examples/manifold/plot_lle_digits#sphx-glr-auto-examples-manifold-plot-lle-digits-py)
[The Johnson-Lindenstrauss bound for embedding with random projections](../../auto_examples/miscellaneous/plot_johnson_lindenstrauss_bound#sphx-glr-auto-examples-miscellaneous-plot-johnson-lindenstrauss-bound-py)
| programming_docs |
scikit_learn sklearn.decomposition.non_negative_factorization sklearn.decomposition.non\_negative\_factorization
==================================================
sklearn.decomposition.non\_negative\_factorization(*X*, *W=None*, *H=None*, *n\_components=None*, *\**, *init=None*, *update\_H=True*, *solver='cd'*, *beta\_loss='frobenius'*, *tol=0.0001*, *max\_iter=200*, *alpha='deprecated'*, *alpha\_W=0.0*, *alpha\_H='same'*, *l1\_ratio=0.0*, *regularization='deprecated'*, *random\_state=None*, *verbose=0*, *shuffle=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_nmf.py#L910)
Compute Non-negative Matrix Factorization (NMF).
Find two non-negative matrices (W, H) whose product approximates the non-negative matrix X. This factorization can be used for example for dimensionality reduction, source separation or topic extraction.
The objective function is:
\[ \begin{aligned}
L(W, H) &= 0.5 \* ||X - WH||\_{loss}^2 \\
&+ alpha\_W \* l1\_ratio \* n\_features \* ||vec(W)||\_1 \\
&+ alpha\_H \* l1\_ratio \* n\_samples \* ||vec(H)||\_1 \\
&+ 0.5 \* alpha\_W \* (1 - l1\_ratio) \* n\_features \* ||W||\_{Fro}^2 \\
&+ 0.5 \* alpha\_H \* (1 - l1\_ratio) \* n\_samples \* ||H||\_{Fro}^2
\end{aligned} \]
Where:
\(||A||\_{Fro}^2 = \sum\_{i,j} A\_{ij}^2\) (Frobenius norm)
\(||vec(A)||\_1 = \sum\_{i,j} abs(A\_{ij})\) (Elementwise L1 norm)
The generic norm \(||X - WH||\_{loss}^2\) may represent the Frobenius norm or another supported beta-divergence loss. The choice between options is controlled by the `beta_loss` parameter.
The regularization terms are scaled by `n_features` for `W` and by `n_samples` for `H` to keep their impact balanced with respect to one another and to the data fit term as independent as possible of the size `n_samples` of the training set.
The objective function is minimized with an alternating minimization of W and H. If H is given and update\_H=False, it solves for W only.
Note that the transformed data is named W and the components matrix is named H. In the NMF literature, the naming convention is usually the opposite since the data matrix X is transposed.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Constant matrix.
**W**array-like of shape (n\_samples, n\_components), default=None
If init=’custom’, it is used as initial guess for the solution.
**H**array-like of shape (n\_components, n\_features), default=None
If init=’custom’, it is used as initial guess for the solution. If update\_H=False, it is used as a constant, to solve for W only.
**n\_components**int, default=None
Number of components. If n\_components is not set, all features are kept.
**init**{‘random’, ‘nndsvd’, ‘nndsvda’, ‘nndsvdar’, ‘custom’}, default=None
Method used to initialize the procedure.
Valid options:
* None: ‘nndsvda’ if n\_components < n\_features, otherwise ‘random’.
* ‘random’: non-negative random matrices, scaled with: sqrt(X.mean() / n\_components)
* ‘nndsvd’: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness)
* ‘nndsvda’: NNDSVD with zeros filled with the average of X (better when sparsity is not desired)
* ‘nndsvdar’: NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired)
* ‘custom’: use custom matrices W and H if `update_H=True`. If `update_H=False`, then only custom matrix H is used.
Changed in version 0.23: The default value of `init` changed from ‘random’ to None in 0.23.
Changed in version 1.1: When `init=None` and n\_components is less than n\_samples and n\_features, the default is `nndsvda` instead of `nndsvd`.
**update\_H**bool, default=True
Set to True, both W and H will be estimated from initial guesses. Set to False, only W will be estimated.
**solver**{‘cd’, ‘mu’}, default=’cd’
Numerical solver to use:
* ‘cd’ is a Coordinate Descent solver that uses Fast Hierarchical Alternating Least Squares (Fast HALS).
* ‘mu’ is a Multiplicative Update solver.
New in version 0.17: Coordinate Descent solver.
New in version 0.19: Multiplicative Update solver.
**beta\_loss**float or {‘frobenius’, ‘kullback-leibler’, ‘itakura-saito’}, default=’frobenius’
Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from ‘frobenius’ (or 2) and ‘kullback-leibler’ (or 1) lead to significantly slower fits. Note that for beta\_loss <= 0 (or ‘itakura-saito’), the input matrix X cannot contain zeros. Used only in ‘mu’ solver.
New in version 0.19.
**tol**float, default=1e-4
Tolerance of the stopping condition.
**max\_iter**int, default=200
Maximum number of iterations before timing out.
**alpha**float, default=0.0
Constant that multiplies the regularization terms. Set it to zero to have no regularization. When using `alpha` instead of `alpha_W` and `alpha_H`, the regularization terms are not scaled by the `n_features` (resp. `n_samples`) factors for `W` (resp. `H`).
Deprecated since version 1.0: The `alpha` parameter is deprecated in 1.0 and will be removed in 1.2. Use `alpha_W` and `alpha_H` instead.
**alpha\_W**float, default=0.0
Constant that multiplies the regularization terms of `W`. Set it to zero (default) to have no regularization on `W`.
New in version 1.0.
**alpha\_H**float or “same”, default=”same”
Constant that multiplies the regularization terms of `H`. Set it to zero to have no regularization on `H`. If “same” (default), it takes the same value as `alpha_W`.
New in version 1.0.
**l1\_ratio**float, default=0.0
The regularization mixing parameter, with 0 <= l1\_ratio <= 1. For l1\_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius Norm). For l1\_ratio = 1 it is an elementwise L1 penalty. For 0 < l1\_ratio < 1, the penalty is a combination of L1 and L2.
**regularization**{‘both’, ‘components’, ‘transformation’}, default=None
Select whether the regularization affects the components (H), the transformation (W), both or none of them.
Deprecated since version 1.0: The `regularization` parameter is deprecated in 1.0 and will be removed in 1.2. Use `alpha_W` and `alpha_H` instead.
**random\_state**int, RandomState instance or None, default=None
Used for NMF initialisation (when `init` == ‘nndsvdar’ or ‘random’), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**verbose**int, default=0
The verbosity level.
**shuffle**bool, default=False
If true, randomize the order of coordinates in the CD solver.
Returns:
**W**ndarray of shape (n\_samples, n\_components)
Solution to the non-negative least squares problem.
**H**ndarray of shape (n\_components, n\_features)
Solution to the non-negative least squares problem.
**n\_iter**int
Actual number of iterations.
#### References
[1] [“Fast local algorithms for large scale nonnegative matrix and tensor factorizations”](https://doi.org/10.1587/transfun.E92.A.708) Cichocki, Andrzej, and P. H. A. N. Anh-Huy. IEICE transactions on fundamentals of electronics, communications and computer sciences 92.3: 708-721, 2009.
[2] [“Algorithms for nonnegative matrix factorization with the beta-divergence”](https://doi.org/10.1162/NECO_a_00168) Fevotte, C., & Idier, J. (2011). Neural Computation, 23(9).
#### Examples
```
>>> import numpy as np
>>> X = np.array([[1,1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import non_negative_factorization
>>> W, H, n_iter = non_negative_factorization(X, n_components=2,
... init='random', random_state=0)
```
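As noted above, a previously learned `H` can be kept fixed by passing `update_H=False` together with `init='custom'`, in which case only `W` is solved for. A minimal sketch continuing the example above:
```
>>> W_only, H_fixed, n_iter = non_negative_factorization(
...     X, H=H, n_components=2, init='custom', update_H=False,
...     random_state=0)
>>> W_only.shape, H_fixed.shape
((6, 2), (2, 2))
```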
scikit_learn sklearn.tree.ExtraTreeRegressor sklearn.tree.ExtraTreeRegressor
===============================
*class*sklearn.tree.ExtraTreeRegressor(*\**, *criterion='squared\_error'*, *splitter='random'*, *max\_depth=None*, *min\_samples\_split=2*, *min\_samples\_leaf=1*, *min\_weight\_fraction\_leaf=0.0*, *max\_features=1.0*, *random\_state=None*, *min\_impurity\_decrease=0.0*, *max\_leaf\_nodes=None*, *ccp\_alpha=0.0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L1651)
An extremely randomized tree regressor.
Extra-trees differ from classic decision trees in the way they are built. When looking for the best split to separate the samples of a node into two groups, random splits are drawn for each of the `max_features` randomly selected features and the best split among those is chosen. When `max_features` is set to 1, this amounts to building a totally random decision tree.
Warning: Extra-trees should only be used within ensemble methods.
Read more in the [User Guide](../tree#tree).
Parameters:
**criterion**{“squared\_error”, “friedman\_mse”}, default=”squared\_error”
The function to measure the quality of a split. Supported criteria are “squared\_error” for the mean squared error, which is equal to variance reduction as feature selection criterion and “mae” for the mean absolute error.
New in version 0.18: Mean Absolute Error (MAE) criterion.
New in version 0.24: Poisson deviance criterion.
Deprecated since version 1.0: Criterion “mse” was deprecated in v1.0 and will be removed in version 1.2. Use `criterion="squared_error"` which is equivalent.
Deprecated since version 1.0: Criterion “mae” was deprecated in v1.0 and will be removed in version 1.2. Use `criterion="absolute_error"` which is equivalent.
**splitter**{“random”, “best”}, default=”random”
The strategy used to choose the split at each node. Supported strategies are “best” to choose the best split and “random” to choose the best random split.
**max\_depth**int, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min\_samples\_split samples.
**min\_samples\_split**int or float, default=2
The minimum number of samples required to split an internal node:
* If int, then consider `min_samples_split` as the minimum number.
* If float, then `min_samples_split` is a fraction and `ceil(min_samples_split * n_samples)` are the minimum number of samples for each split.
Changed in version 0.18: Added float values for fractions.
**min\_samples\_leaf**int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least `min_samples_leaf` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
* If int, then consider `min_samples_leaf` as the minimum number.
* If float, then `min_samples_leaf` is a fraction and `ceil(min_samples_leaf * n_samples)` are the minimum number of samples for each node.
Changed in version 0.18: Added float values for fractions.
**min\_weight\_fraction\_leaf**float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample\_weight is not provided.
**max\_features**int, float, {“auto”, “sqrt”, “log2”} or None, default=1.0
The number of features to consider when looking for the best split:
* If int, then consider `max_features` features at each split.
* If float, then `max_features` is a fraction and `max(1, int(max_features * n_features_in_))` features are considered at each split.
* If “auto”, then `max_features=n_features`.
* If “sqrt”, then `max_features=sqrt(n_features)`.
* If “log2”, then `max_features=log2(n_features)`.
* If None, then `max_features=n_features`.
Changed in version 1.1: The default of `max_features` changed from `"auto"` to `1.0`.
Deprecated since version 1.1: The `"auto"` option was deprecated in 1.1 and will be removed in 1.3.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than `max_features` features.
**random\_state**int, RandomState instance or None, default=None
Used to pick randomly the `max_features` used at each split. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state) for details.
**min\_impurity\_decrease**float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
```
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
```
where `N` is the total number of samples, `N_t` is the number of samples at the current node, `N_t_L` is the number of samples in the left child, and `N_t_R` is the number of samples in the right child.
`N`, `N_t`, `N_t_R` and `N_t_L` all refer to the weighted sum, if `sample_weight` is passed.
New in version 0.19.
**max\_leaf\_nodes**int, default=None
Grow a tree with `max_leaf_nodes` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
**ccp\_alpha**non-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than `ccp_alpha` will be chosen. By default, no pruning is performed. See [Minimal Cost-Complexity Pruning](../tree#minimal-cost-complexity-pruning) for details.
New in version 0.22.
Attributes:
**max\_features\_**int
The inferred value of max\_features.
[`n_features_`](#sklearn.tree.ExtraTreeRegressor.n_features_ "sklearn.tree.ExtraTreeRegressor.n_features_")int
DEPRECATED: The attribute `n_features_` is deprecated in 1.0 and will be removed in 1.2.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
[`feature_importances_`](#sklearn.tree.ExtraTreeRegressor.feature_importances_ "sklearn.tree.ExtraTreeRegressor.feature_importances_")ndarray of shape (n\_features,)
Return the feature importances.
**n\_outputs\_**int
The number of outputs when `fit` is performed.
**tree\_**Tree instance
The underlying Tree object. Please refer to `help(sklearn.tree._tree.Tree)` for attributes of Tree object and [Understanding the decision tree structure](../../auto_examples/tree/plot_unveil_tree_structure#sphx-glr-auto-examples-tree-plot-unveil-tree-structure-py) for basic usage of these attributes.
See also
[`ExtraTreeClassifier`](sklearn.tree.extratreeclassifier#sklearn.tree.ExtraTreeClassifier "sklearn.tree.ExtraTreeClassifier")
An extremely randomized tree classifier.
[`sklearn.ensemble.ExtraTreesClassifier`](sklearn.ensemble.extratreesclassifier#sklearn.ensemble.ExtraTreesClassifier "sklearn.ensemble.ExtraTreesClassifier")
An extra-trees classifier.
[`sklearn.ensemble.ExtraTreesRegressor`](sklearn.ensemble.extratreesregressor#sklearn.ensemble.ExtraTreesRegressor "sklearn.ensemble.ExtraTreesRegressor")
An extra-trees regressor.
#### Notes
The default values for the parameters controlling the size of the trees (e.g. `max_depth`, `min_samples_leaf`, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values.
#### References
[1] P. Geurts, D. Ernst., and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006.
#### Examples
```
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.ensemble import BaggingRegressor
>>> from sklearn.tree import ExtraTreeRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> extra_tree = ExtraTreeRegressor(random_state=0)
>>> reg = BaggingRegressor(extra_tree, random_state=0).fit(
... X_train, y_train)
>>> reg.score(X_test, y_test)
0.33...
```
#### Methods
| | |
| --- | --- |
| [`apply`](#sklearn.tree.ExtraTreeRegressor.apply "sklearn.tree.ExtraTreeRegressor.apply")(X[, check\_input]) | Return the index of the leaf that each sample is predicted as. |
| [`cost_complexity_pruning_path`](#sklearn.tree.ExtraTreeRegressor.cost_complexity_pruning_path "sklearn.tree.ExtraTreeRegressor.cost_complexity_pruning_path")(X, y[, ...]) | Compute the pruning path during Minimal Cost-Complexity Pruning. |
| [`decision_path`](#sklearn.tree.ExtraTreeRegressor.decision_path "sklearn.tree.ExtraTreeRegressor.decision_path")(X[, check\_input]) | Return the decision path in the tree. |
| [`fit`](#sklearn.tree.ExtraTreeRegressor.fit "sklearn.tree.ExtraTreeRegressor.fit")(X, y[, sample\_weight, check\_input]) | Build a decision tree regressor from the training set (X, y). |
| [`get_depth`](#sklearn.tree.ExtraTreeRegressor.get_depth "sklearn.tree.ExtraTreeRegressor.get_depth")() | Return the depth of the decision tree. |
| [`get_n_leaves`](#sklearn.tree.ExtraTreeRegressor.get_n_leaves "sklearn.tree.ExtraTreeRegressor.get_n_leaves")() | Return the number of leaves of the decision tree. |
| [`get_params`](#sklearn.tree.ExtraTreeRegressor.get_params "sklearn.tree.ExtraTreeRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.tree.ExtraTreeRegressor.predict "sklearn.tree.ExtraTreeRegressor.predict")(X[, check\_input]) | Predict class or regression value for X. |
| [`score`](#sklearn.tree.ExtraTreeRegressor.score "sklearn.tree.ExtraTreeRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.tree.ExtraTreeRegressor.set_params "sklearn.tree.ExtraTreeRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
apply(*X*, *check\_input=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L532)
Return the index of the leaf that each sample is predicted as.
New in version 0.17.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
**check\_input**bool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you’re doing.
Returns:
**X\_leaves**array-like of shape (n\_samples,)
For each datapoint x in X, return the index of the leaf x ends up in. Leaves are numbered within `[0; self.tree_.node_count)`, possibly with gaps in the numbering.
cost\_complexity\_pruning\_path(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L607)
Compute the pruning path during Minimal Cost-Complexity Pruning.
See [Minimal Cost-Complexity Pruning](../tree#minimal-cost-complexity-pruning) for details on the pruning process.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The training input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csc_matrix`.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
The target values (real numbers).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. Splits are also ignored if they would result in any single class carrying a negative weight in either child node.
Returns:
**ccp\_path**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
ccp\_alphasndarray
Effective alphas of subtree during pruning.
impuritiesndarray
Sum of the impurities of the subtree leaves for the corresponding alpha value in `ccp_alphas`.
decision\_path(*X*, *check\_input=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L560)
Return the decision path in the tree.
New in version 0.18.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
**check\_input**bool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you’re doing.
Returns:
**indicator**sparse matrix of shape (n\_samples, n\_nodes)
Return a node indicator CSR matrix where non-zero elements indicate that the samples go through the nodes.
*property*feature\_importances\_
Return the feature importances.
The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See [`sklearn.inspection.permutation_importance`](sklearn.inspection.permutation_importance#sklearn.inspection.permutation_importance "sklearn.inspection.permutation_importance") as an alternative.
Returns:
**feature\_importances\_**ndarray of shape (n\_features,)
Normalized total reduction of criteria by feature (Gini importance).
fit(*X*, *y*, *sample\_weight=None*, *check\_input=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L1313)
Build a decision tree regressor from the training set (X, y).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The training input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csc_matrix`.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
The target values (real numbers). Use `dtype=np.float64` and `order='C'` for maximum efficiency.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
**check\_input**bool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you’re doing.
Returns:
**self**DecisionTreeRegressor
Fitted estimator.
get\_depth()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L130)
Return the depth of the decision tree.
The depth of a tree is the maximum distance between the root and any leaf.
Returns:
**self.tree\_.max\_depth**int
The maximum depth of the tree.
get\_n\_leaves()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L144)
Return the number of leaves of the decision tree.
Returns:
**self.tree\_.n\_leaves**int
Number of leaves.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_features\_
DEPRECATED: The attribute `n_features_` is deprecated in 1.0 and will be removed in 1.2. Use `n_features_in_` instead.
predict(*X*, *check\_input=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/tree/_classes.py#L481)
Predict class or regression value for X.
For a classification model, the predicted class for each sample in X is returned. For a regression model, the predicted value based on X is returned.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples. Internally, it will be converted to `dtype=np.float32` and if a sparse matrix is provided to a sparse `csr_matrix`.
**check\_input**bool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you’re doing.
Returns:
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
The predicted classes, or the predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.datasets.make_sparse_spd_matrix sklearn.datasets.make\_sparse\_spd\_matrix
==========================================
sklearn.datasets.make\_sparse\_spd\_matrix(*dim=1*, *\**, *alpha=0.95*, *norm\_diag=False*, *smallest\_coef=0.1*, *largest\_coef=0.9*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L1419)
Generate a sparse symmetric positive definite matrix.
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**dim**int, default=1
The size of the random matrix to generate.
**alpha**float, default=0.95
The probability that a coefficient is zero (see notes). Larger values enforce more sparsity. The value should be in the range [0, 1].
**norm\_diag**bool, default=False
Whether to normalize the output matrix to make the leading diagonal elements all 1.
**smallest\_coef**float, default=0.1
The value of the smallest coefficient between 0 and 1.
**largest\_coef**float, default=0.9
The value of the largest coefficient between 0 and 1.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**prec**sparse matrix of shape (dim, dim)
The generated matrix.
See also
[`make_spd_matrix`](sklearn.datasets.make_spd_matrix#sklearn.datasets.make_spd_matrix "sklearn.datasets.make_spd_matrix")
Generate a random symmetric, positive-definite matrix.
#### Notes
The sparsity is actually imposed on the cholesky factor of the matrix. Thus alpha does not translate directly into the filling fraction of the matrix itself.
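A minimal sketch of generating and checking such a matrix (the `dim`, `alpha` and `random_state` values below are illustrative; “sparse” refers to the many zero entries, and in recent releases the returned object is a regular ndarray):
```
>>> import numpy as np
>>> from sklearn.datasets import make_sparse_spd_matrix
>>> prec = make_sparse_spd_matrix(dim=4, alpha=0.9, random_state=42)
>>> prec.shape
(4, 4)
>>> np.allclose(prec, prec.T)  # symmetric
True
>>> bool(np.all(np.linalg.eigvalsh(prec) > 0))  # positive definite
True
```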
Examples using `sklearn.datasets.make_sparse_spd_matrix`
--------------------------------------------------------
[Sparse inverse covariance estimation](../../auto_examples/covariance/plot_sparse_cov#sphx-glr-auto-examples-covariance-plot-sparse-cov-py)
scikit_learn sklearn.cross_decomposition.CCA sklearn.cross\_decomposition.CCA
================================
*class*sklearn.cross\_decomposition.CCA(*n\_components=2*, *\**, *scale=True*, *max\_iter=500*, *tol=1e-06*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L793)
Canonical Correlation Analysis, also known as “Mode B” PLS.
Read more in the [User Guide](../cross_decomposition#cross-decomposition).
Parameters:
**n\_components**int, default=2
Number of components to keep. Should be in `[1, min(n_samples,
n_features, n_targets)]`.
**scale**bool, default=True
Whether to scale `X` and `Y`.
**max\_iter**int, default=500
The maximum number of iterations of the power method.
**tol**float, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of `u_i - u_{i-1}` is less than `tol`, where `u` corresponds to the left singular vector.
**copy**bool, default=True
Whether to copy `X` and `Y` in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays.
Attributes:
**x\_weights\_**ndarray of shape (n\_features, n\_components)
The left singular vectors of the cross-covariance matrices of each iteration.
**y\_weights\_**ndarray of shape (n\_targets, n\_components)
The right singular vectors of the cross-covariance matrices of each iteration.
**x\_loadings\_**ndarray of shape (n\_features, n\_components)
The loadings of `X`.
**y\_loadings\_**ndarray of shape (n\_targets, n\_components)
The loadings of `Y`.
**x\_rotations\_**ndarray of shape (n\_features, n\_components)
The projection matrix used to transform `X`.
**y\_rotations\_**ndarray of shape (n\_targets, n\_components)
The projection matrix used to transform `Y`.
[`coef_`](#sklearn.cross_decomposition.CCA.coef_ "sklearn.cross_decomposition.CCA.coef_")ndarray of shape (n\_features, n\_targets)
The coefficients of the linear model.
**intercept\_**ndarray of shape (n\_targets,)
The intercepts of the linear model such that `Y` is approximated as `Y = X @ coef_ + intercept_`.
New in version 1.1.
**n\_iter\_**list of shape (n\_components,)
Number of iterations of the power method, for each component.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`PLSCanonical`](sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical "sklearn.cross_decomposition.PLSCanonical")
Partial Least Squares transformer and regressor.
[`PLSSVD`](sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD "sklearn.cross_decomposition.PLSSVD")
Partial Least Square SVD.
#### Examples
```
>>> from sklearn.cross_decomposition import CCA
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [3.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> cca = CCA(n_components=1)
>>> cca.fit(X, Y)
CCA(n_components=1)
>>> X_c, Y_c = cca.transform(X, Y)
```
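Continuing the example above, `CCA` also exposes a regression interface: `predict(X)` approximates `Y` via the `coef_` and `intercept_` attributes described above (a minimal sketch, shown only through shapes):
```
>>> Y_pred = cca.predict(X)
>>> Y_pred.shape
(4, 2)
```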
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.cross_decomposition.CCA.fit "sklearn.cross_decomposition.CCA.fit")(X, Y) | Fit model to data. |
| [`fit_transform`](#sklearn.cross_decomposition.CCA.fit_transform "sklearn.cross_decomposition.CCA.fit_transform")(X[, y]) | Learn and apply the dimension reduction on the train data. |
| [`get_feature_names_out`](#sklearn.cross_decomposition.CCA.get_feature_names_out "sklearn.cross_decomposition.CCA.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.cross_decomposition.CCA.get_params "sklearn.cross_decomposition.CCA.get_params")([deep]) | Get parameters for this estimator. |
| [`inverse_transform`](#sklearn.cross_decomposition.CCA.inverse_transform "sklearn.cross_decomposition.CCA.inverse_transform")(X[, Y]) | Transform data back to its original space. |
| [`predict`](#sklearn.cross_decomposition.CCA.predict "sklearn.cross_decomposition.CCA.predict")(X[, copy]) | Predict targets of given samples. |
| [`score`](#sklearn.cross_decomposition.CCA.score "sklearn.cross_decomposition.CCA.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.cross_decomposition.CCA.set_params "sklearn.cross_decomposition.CCA.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.cross_decomposition.CCA.transform "sklearn.cross_decomposition.CCA.transform")(X[, Y, copy]) | Apply the dimension reduction. |
*property*coef\_
The coefficients of the linear model.
fit(*X*, *Y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L198)
Fit model to data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of predictors.
**Y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target vectors, where `n_samples` is the number of samples and `n_targets` is the number of response variables.
Returns:
**self**object
Fitted model.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L479)
Learn and apply the dimension reduction on the train data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of predictors.
**y**array-like of shape (n\_samples, n\_targets), default=None
Target vectors, where `n_samples` is the number of samples and `n_targets` is the number of response variables.
Returns:
**self**ndarray of shape (n\_samples, n\_components)
Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.cross_decomposition.CCA.fit "sklearn.cross_decomposition.CCA.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
inverse\_transform(*X*, *Y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L404)
Transform data back to its original space.
Parameters:
**X**array-like of shape (n\_samples, n\_components)
New data, where `n_samples` is the number of samples and `n_components` is the number of pls components.
**Y**array-like of shape (n\_samples, n\_components)
New target, where `n_samples` is the number of samples and `n_components` is the number of pls components.
Returns:
**X\_reconstructed**ndarray of shape (n\_samples, n\_features)
Return the reconstructed `X` data.
**Y\_reconstructed**ndarray of shape (n\_samples, n\_targets)
Return the reconstructed `Y` target. Only returned when `Y` is given.
#### Notes
This transformation will only be exact if `n_components=n_features`.
predict(*X*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L448)
Predict targets of given samples.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples.
**copy**bool, default=True
Whether to copy `X` and `Y`, or perform in-place normalization.
Returns:
**y\_pred**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
Returns predicted values.
#### Notes
This call requires the estimation of a matrix of shape `(n_features, n_targets)`, which may be an issue in high dimensional space.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*, *Y=None*, *copy=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cross_decomposition/_pls.py#L365)
Apply the dimension reduction.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Samples to transform.
**Y**array-like of shape (n\_samples, n\_targets), default=None
Target vectors.
**copy**bool, default=True
Whether to copy `X` and `Y`, or perform in-place normalization.
Returns:
**x\_scores, y\_scores**array-like or tuple of array-like
Return `x_scores` if `Y` is not given, `(x_scores, y_scores)` otherwise.
Examples using `sklearn.cross_decomposition.CCA`
------------------------------------------------
[Compare cross decomposition methods](../../auto_examples/cross_decomposition/plot_compare_cross_decomposition#sphx-glr-auto-examples-cross-decomposition-plot-compare-cross-decomposition-py)
[Multilabel classification](../../auto_examples/miscellaneous/plot_multilabel#sphx-glr-auto-examples-miscellaneous-plot-multilabel-py)
scikit_learn sklearn.cluster.mean_shift sklearn.cluster.mean\_shift
===========================
sklearn.cluster.mean\_shift(*X*, *\**, *bandwidth=None*, *seeds=None*, *bin\_seeding=False*, *min\_bin\_freq=1*, *cluster\_all=True*, *max\_iter=300*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/cluster/_mean_shift.py#L110)
Perform mean shift clustering of data using a flat kernel.
Read more in the [User Guide](../clustering#mean-shift).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
**bandwidth**float, default=None
Kernel bandwidth.
If bandwidth is not given, it is determined using a heuristic based on the median of all pairwise distances. This will take quadratic time in the number of samples. The sklearn.cluster.estimate\_bandwidth function can be used to do this more efficiently.
**seeds**array-like of shape (n\_seeds, n\_features) or None
Points used as initial kernel locations. If None and bin\_seeding=False, each data point is used as a seed. If None and bin\_seeding=True, see bin\_seeding.
**bin\_seeding**bool, default=False
If true, initial kernel locations are not locations of all points, but rather the location of the discretized version of points, where points are binned onto a grid whose coarseness corresponds to the bandwidth. Setting this option to True will speed up the algorithm because fewer seeds will be initialized. Ignored if seeds argument is not None.
**min\_bin\_freq**int, default=1
To speed up the algorithm, accept only those bins with at least min\_bin\_freq points as seeds.
**cluster\_all**bool, default=True
If true, then all points are clustered, even those orphans that are not within any kernel. Orphans are assigned to the nearest kernel. If false, then orphans are given cluster label -1.
**max\_iter**int, default=300
Maximum number of iterations per seed point before the clustering operation terminates (for that seed point), if it has not converged yet.
**n\_jobs**int, default=None
The number of jobs to use for the computation. This works by running the mean-shift procedure for each seed in parallel.
`None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
New in version 0.17: Parallel Execution using *n\_jobs*.
Returns:
**cluster\_centers**ndarray of shape (n\_clusters, n\_features)
Coordinates of cluster centers.
**labels**ndarray of shape (n\_samples,)
Cluster labels for each point.
#### Notes
For an example, see [examples/cluster/plot\_mean\_shift.py](../../auto_examples/cluster/plot_mean_shift#sphx-glr-auto-examples-cluster-plot-mean-shift-py).
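A minimal sketch of calling the function directly, on the same toy data used in the `MeanShift` class example (the bandwidth value is illustrative):
```
>>> import numpy as np
>>> from sklearn.cluster import mean_shift
>>> X = np.array([[1, 1], [2, 1], [1, 0],
...               [4, 7], [3, 5], [3, 6]])
>>> cluster_centers, labels = mean_shift(X, bandwidth=2)
>>> labels
array([1, 1, 1, 0, 0, 0])
>>> cluster_centers.shape
(2, 2)
```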
scikit_learn sklearn.datasets.make_friedman1 sklearn.datasets.make\_friedman1
================================
sklearn.datasets.make\_friedman1(*n\_samples=100*, *n\_features=10*, *\**, *noise=0.0*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L963)
Generate the “Friedman #1” regression problem.
This dataset is described in Friedman [1] and Breiman [2].
Inputs `X` are independent features uniformly distributed on the interval [0, 1]. The output `y` is created according to the formula:
```
y(X) = 10 * sin(pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4] + noise * N(0, 1).
```
Out of the `n_features` features, only 5 are actually used to compute `y`. The remaining features are independent of `y`.
The number of features has to be >= 5.
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**n\_samples**int, default=100
The number of samples.
**n\_features**int, default=10
The number of features. Should be at least 5.
**noise**float, default=0.0
The standard deviation of the gaussian noise applied to the output.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**X**ndarray of shape (n\_samples, n\_features)
The input samples.
**y**ndarray of shape (n\_samples,)
The output values.
#### References
[1] J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
[2] L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996.
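A minimal sketch (parameter values are illustrative), checking the output shapes and the [0, 1] range of the inputs:
```
>>> from sklearn.datasets import make_friedman1
>>> X, y = make_friedman1(n_samples=100, n_features=10, noise=0.0,
...                       random_state=0)
>>> X.shape, y.shape
((100, 10), (100,))
>>> bool(X.min() >= 0.0 and X.max() <= 1.0)  # features are uniform on [0, 1]
True
```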
scikit_learn sklearn.naive_bayes.ComplementNB sklearn.naive\_bayes.ComplementNB
=================================
*class*sklearn.naive\_bayes.ComplementNB(*\**, *alpha=1.0*, *fit\_prior=True*, *class\_prior=None*, *norm=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L869)
The Complement Naive Bayes classifier described in Rennie et al. (2003).
The Complement Naive Bayes classifier was designed to correct the “severe assumptions” made by the standard Multinomial Naive Bayes classifier. It is particularly suited for imbalanced data sets.
Read more in the [User Guide](../naive_bayes#complement-naive-bayes).
New in version 0.20.
Parameters:
**alpha**float, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
**fit\_prior**bool, default=True
Only used in edge case with a single class in the training set.
**class\_prior**array-like of shape (n\_classes,), default=None
Prior probabilities of the classes. Not used.
**norm**bool, default=False
Whether or not a second normalization of the weights is performed. The default behavior mirrors the implementations found in Mahout and Weka, which do not follow the full algorithm described in Table 9 of the paper.
Attributes:
**class\_count\_**ndarray of shape (n\_classes,)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
**class\_log\_prior\_**ndarray of shape (n\_classes,)
Smoothed empirical log probability for each class. Only used in edge case with a single class in the training set.
**classes\_**ndarray of shape (n\_classes,)
Class labels known to the classifier.
**feature\_all\_**ndarray of shape (n\_features,)
Number of samples encountered for each feature during fitting. This value is weighted by the sample weight when provided.
**feature\_count\_**ndarray of shape (n\_classes, n\_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
**feature\_log\_prob\_**ndarray of shape (n\_classes, n\_features)
Empirical weights for class complements.
[`n_features_`](#sklearn.naive_bayes.ComplementNB.n_features_ "sklearn.naive_bayes.ComplementNB.n_features_")int
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`BernoulliNB`](sklearn.naive_bayes.bernoullinb#sklearn.naive_bayes.BernoulliNB "sklearn.naive_bayes.BernoulliNB")
Naive Bayes classifier for multivariate Bernoulli models.
[`CategoricalNB`](sklearn.naive_bayes.categoricalnb#sklearn.naive_bayes.CategoricalNB "sklearn.naive_bayes.CategoricalNB")
Naive Bayes classifier for categorical features.
[`GaussianNB`](sklearn.naive_bayes.gaussiannb#sklearn.naive_bayes.GaussianNB "sklearn.naive_bayes.GaussianNB")
Gaussian Naive Bayes.
[`MultinomialNB`](sklearn.naive_bayes.multinomialnb#sklearn.naive_bayes.MultinomialNB "sklearn.naive_bayes.MultinomialNB")
Naive Bayes classifier for multinomial models.
#### References
Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003). Tackling the poor assumptions of naive bayes text classifiers. In ICML (Vol. 3, pp. 616-623). <https://people.csail.mit.edu/jrennie/papers/icml03-nb.pdf>
#### Examples
```
>>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import ComplementNB
>>> clf = ComplementNB()
>>> clf.fit(X, y)
ComplementNB()
>>> print(clf.predict(X[2:3]))
[3]
```
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.naive_bayes.ComplementNB.fit "sklearn.naive_bayes.ComplementNB.fit")(X, y[, sample\_weight]) | Fit Naive Bayes classifier according to X, y. |
| [`get_params`](#sklearn.naive_bayes.ComplementNB.get_params "sklearn.naive_bayes.ComplementNB.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.naive_bayes.ComplementNB.partial_fit "sklearn.naive_bayes.ComplementNB.partial_fit")(X, y[, classes, sample\_weight]) | Incremental fit on a batch of samples. |
| [`predict`](#sklearn.naive_bayes.ComplementNB.predict "sklearn.naive_bayes.ComplementNB.predict")(X) | Perform classification on an array of test vectors X. |
| [`predict_log_proba`](#sklearn.naive_bayes.ComplementNB.predict_log_proba "sklearn.naive_bayes.ComplementNB.predict_log_proba")(X) | Return log-probability estimates for the test vector X. |
| [`predict_proba`](#sklearn.naive_bayes.ComplementNB.predict_proba "sklearn.naive_bayes.ComplementNB.predict_proba")(X) | Return probability estimates for the test vector X. |
| [`score`](#sklearn.naive_bayes.ComplementNB.score "sklearn.naive_bayes.ComplementNB.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.naive_bayes.ComplementNB.set_params "sklearn.naive_bayes.ComplementNB.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L679)
Fit Naive Bayes classifier according to X, y.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weights applied to individual samples (1. for unweighted).
Returns:
**self**object
Returns the instance itself.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*n\_features\_
DEPRECATED: Attribute `n_features_` was deprecated in version 1.0 and will be removed in 1.2. Use `n_features_in_` instead.
partial\_fit(*X*, *y*, *classes=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L598)
Incremental fit on a batch of samples.
This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning.
This is especially useful when the whole dataset is too big to fit in memory at once.
This method has some performance overhead, hence it is better to call partial\_fit on chunks of data that are as large as possible (as long as they fit in the memory budget) to hide the overhead.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training vectors, where `n_samples` is the number of samples and `n_features` is the number of features.
**y**array-like of shape (n\_samples,)
Target values.
**classes**array-like of shape (n\_classes,), default=None
List of all the classes that can possibly appear in the y vector.
Must be provided at the first call to partial\_fit, can be omitted in subsequent calls.
**sample\_weight**array-like of shape (n\_samples,), default=None
Weights applied to individual samples (1. for unweighted).
Returns:
**self**object
Returns the instance itself.
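A minimal out-of-core sketch of `partial_fit` along these lines (the random data and the two-chunk split are purely illustrative; `classes` lists every label that can ever appear):
```
>>> import numpy as np
>>> from sklearn.naive_bayes import ComplementNB
>>> rng = np.random.RandomState(0)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 1, 2, 3])
>>> clf = ComplementNB()
>>> clf = clf.partial_fit(X[:3], y[:3], classes=np.array([1, 2, 3]))
>>> clf = clf.partial_fit(X[3:], y[3:])
>>> clf.predict(X[:1]).shape
(1,)
```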
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L65)
Perform classification on an array of test vectors X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**C**ndarray of shape (n\_samples,)
Predicted target values for X.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L84)
Return log-probability estimates for the test vector X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**C**array-like of shape (n\_samples, n\_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/naive_bayes.py#L107)
Return probability estimates for the test vector X.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The input samples.
Returns:
**C**array-like of shape (n\_samples, n\_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute [classes\_](https://scikit-learn.org/1.1/glossary.html#term-classes_).
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.naive_bayes.ComplementNB`
-------------------------------------------------
[Classification of text documents using sparse features](../../auto_examples/text/plot_document_classification_20newsgroups#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py)
scikit_learn sklearn.metrics.pairwise.kernel_metrics sklearn.metrics.pairwise.kernel\_metrics
========================================
sklearn.metrics.pairwise.kernel\_metrics()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L2054)
Valid metrics for pairwise\_kernels.
This function simply returns the valid pairwise kernel functions. It exists, however, to allow for a verbose description of the mapping for each of the valid strings.
The valid kernel functions, and the function they map to, are:
| metric | Function |
| --- | --- |
| ‘additive\_chi2’ | sklearn.pairwise.additive\_chi2\_kernel |
| ‘chi2’ | sklearn.pairwise.chi2\_kernel |
| ‘linear’ | sklearn.pairwise.linear\_kernel |
| ‘poly’ | sklearn.pairwise.polynomial\_kernel |
| ‘polynomial’ | sklearn.pairwise.polynomial\_kernel |
| ‘rbf’ | sklearn.pairwise.rbf\_kernel |
| ‘laplacian’ | sklearn.pairwise.laplacian\_kernel |
| ‘sigmoid’ | sklearn.pairwise.sigmoid\_kernel |
| ‘cosine’ | sklearn.pairwise.cosine\_similarity |
Read more in the [User Guide](../metrics#metrics).
Returns:
**kernel\_metrics**dict
Returns valid metrics for pairwise\_kernels.
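For instance, the returned dictionary can be used to look up a kernel by name (a minimal sketch):
```
>>> from sklearn.metrics.pairwise import kernel_metrics
>>> sorted(kernel_metrics())
['additive_chi2', 'chi2', 'cosine', 'laplacian', 'linear', 'poly', 'polynomial', 'rbf', 'sigmoid']
>>> kernel_metrics()['linear']([[1., 2.]], [[3., 4.]])
array([[11.]])
```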
scikit_learn sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2 sklearn.utils.sparsefuncs\_fast.inplace\_csr\_row\_normalize\_l2
================================================================
sklearn.utils.sparsefuncs\_fast.inplace\_csr\_row\_normalize\_l2()
In-place row normalization using the l2 norm.
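A minimal sketch (the function modifies the CSR matrix in place and expects floating-point data):
```
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.sparsefuncs_fast import inplace_csr_row_normalize_l2
>>> X = csr_matrix(np.array([[3.0, 4.0], [0.0, 5.0]]))
>>> inplace_csr_row_normalize_l2(X)
>>> X.toarray()
array([[0.6, 0.8],
       [0. , 1. ]])
```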
scikit_learn sklearn.covariance.shrunk_covariance sklearn.covariance.shrunk\_covariance
=====================================
sklearn.covariance.shrunk\_covariance(*emp\_cov*, *shrinkage=0.1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_shrunk_covariance.py#L27)
Calculate a covariance matrix shrunk on the diagonal.
Read more in the [User Guide](../covariance#shrunk-covariance).
Parameters:
**emp\_cov**array-like of shape (n\_features, n\_features)
Covariance matrix to be shrunk.
**shrinkage**float, default=0.1
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1].
Returns:
**shrunk\_cov**ndarray of shape (n\_features, n\_features)
Shrunk covariance.
#### Notes
The regularized (shrunk) covariance is given by:
```
(1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
```
where `mu = trace(cov) / n_features`.
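A small worked sketch of that formula (the 2x2 covariance below is illustrative): with `shrinkage=0.5` and `mu = trace(emp_cov) / 2 = 2`, the off-diagonal entries are halved while the diagonal stays at 2.
```
>>> import numpy as np
>>> from sklearn.covariance import shrunk_covariance
>>> emp_cov = np.array([[2.0, 1.0], [1.0, 2.0]])
>>> shrunk_covariance(emp_cov, shrinkage=0.5)
array([[2. , 0.5],
       [0.5, 2. ]])
```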
scikit_learn sklearn.linear_model.SGDOneClassSVM sklearn.linear\_model.SGDOneClassSVM
====================================
*class*sklearn.linear\_model.SGDOneClassSVM(*nu=0.5*, *fit\_intercept=True*, *max\_iter=1000*, *tol=0.001*, *shuffle=True*, *verbose=0*, *random\_state=None*, *learning\_rate='optimal'*, *eta0=0.0*, *power\_t=0.5*, *warm\_start=False*, *average=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L1969)
Solves linear One-Class SVM using Stochastic Gradient Descent.
This implementation is meant to be used with a kernel approximation technique (e.g. `sklearn.kernel_approximation.Nystroem`) to obtain results similar to `sklearn.svm.OneClassSVM` which uses a Gaussian kernel by default.
Read more in the [User Guide](../sgd#sgd-online-one-class-svm).
New in version 1.0.
Parameters:
**nu**float, default=0.5
The nu parameter of the One Class SVM: an upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken.
**fit\_intercept**bool, default=True
Whether the intercept should be estimated or not. Defaults to True.
**max\_iter**int, default=1000
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the `fit` method, and not the `partial_fit`. Defaults to 1000.
**tol**float or None, default=1e-3
The stopping criterion. If it is not None, the iterations will stop when (loss > previous\_loss - tol). Defaults to 1e-3.
**shuffle**bool, default=True
Whether or not the training data should be shuffled after each epoch. Defaults to True.
**verbose**int, default=0
The verbosity level.
**random\_state**int, RandomState instance or None, default=None
The seed of the pseudo random number generator to use when shuffling the data. If int, random\_state is the seed used by the random number generator; If RandomState instance, random\_state is the random number generator; If None, the random number generator is the RandomState instance used by `np.random`.
**learning\_rate**{‘constant’, ‘optimal’, ‘invscaling’, ‘adaptive’}, default=’optimal’
The learning rate schedule to use with `fit`. (If using `partial_fit`, learning rate must be controlled directly).
* ‘constant’: `eta = eta0`
* ‘optimal’: `eta = 1.0 / (alpha * (t + t0))` where t0 is chosen by a heuristic proposed by Leon Bottou.
* ‘invscaling’: `eta = eta0 / pow(t, power_t)`
* ‘adaptive’: `eta = eta0`, as long as the training loss keeps decreasing. Each time n\_iter\_no\_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if early\_stopping is True, the current learning rate is divided by 5.
**eta0**float, default=0.0
The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.0 as eta0 is not used by the default schedule ‘optimal’.
**power\_t**float, default=0.5
The exponent for inverse scaling learning rate [default 0.5].
**warm\_start**bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See [the Glossary](https://scikit-learn.org/1.1/glossary.html#term-warm_start).
Repeatedly calling fit or partial\_fit when warm\_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling `fit` resets this counter, while `partial_fit` will result in increasing the existing counter.
**average**bool or int, default=False
When set to True, computes the averaged SGD weights and stores the result in the `coef_` attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So `average=10` will begin averaging after seeing 10 samples.
Attributes:
**coef\_**ndarray of shape (1, n\_features)
Weights assigned to the features.
**offset\_**ndarray of shape (1,)
Offset used to define the decision function from the raw scores. We have the relation: decision\_function = score\_samples - offset.
**n\_iter\_**int
The actual number of iterations to reach the stopping criterion.
**t\_**int
Number of weight updates performed during training. Same as `(n_iter_ * n_samples)`.
**loss\_function\_**concrete `LossFunction`
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`sklearn.svm.OneClassSVM`](sklearn.svm.oneclasssvm#sklearn.svm.OneClassSVM "sklearn.svm.OneClassSVM")
Unsupervised Outlier Detection.
#### Notes
This estimator has a linear complexity in the number of training samples and is thus better suited than the `sklearn.svm.OneClassSVM` implementation for datasets with a large number of training samples (say > 10,000).
#### Examples
```
>>> import numpy as np
>>> from sklearn import linear_model
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> clf = linear_model.SGDOneClassSVM(random_state=42)
>>> clf.fit(X)
SGDOneClassSVM(random_state=42)
```
```
>>> print(clf.predict([[4, 4]]))
[1]
```
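A sketch of the kernel-approximation pairing described above, combining a `Nystroem` feature map with `SGDOneClassSVM` in a pipeline (the hyperparameter values and toy data are illustrative):
```
>>> import numpy as np
>>> from sklearn.kernel_approximation import Nystroem
>>> from sklearn.linear_model import SGDOneClassSVM
>>> from sklearn.pipeline import make_pipeline
>>> X = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
>>> pipe = make_pipeline(
...     Nystroem(gamma=1.0, n_components=4, random_state=42),
...     SGDOneClassSVM(nu=0.5, random_state=42),
... )
>>> pipe = pipe.fit(X)
>>> pipe.predict(X).shape  # labels: 1 for inliers, -1 for outliers
(4,)
```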
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.linear_model.SGDOneClassSVM.decision_function "sklearn.linear_model.SGDOneClassSVM.decision_function")(X) | Signed distance to the separating hyperplane. |
| [`densify`](#sklearn.linear_model.SGDOneClassSVM.densify "sklearn.linear_model.SGDOneClassSVM.densify")() | Convert coefficient matrix to dense array format. |
| [`fit`](#sklearn.linear_model.SGDOneClassSVM.fit "sklearn.linear_model.SGDOneClassSVM.fit")(X[, y, coef\_init, offset\_init, ...]) | Fit linear One-Class SVM with Stochastic Gradient Descent. |
| [`fit_predict`](#sklearn.linear_model.SGDOneClassSVM.fit_predict "sklearn.linear_model.SGDOneClassSVM.fit_predict")(X[, y]) | Perform fit on X and returns labels for X. |
| [`get_params`](#sklearn.linear_model.SGDOneClassSVM.get_params "sklearn.linear_model.SGDOneClassSVM.get_params")([deep]) | Get parameters for this estimator. |
| [`partial_fit`](#sklearn.linear_model.SGDOneClassSVM.partial_fit "sklearn.linear_model.SGDOneClassSVM.partial_fit")(X[, y, sample\_weight]) | Fit linear One-Class SVM with Stochastic Gradient Descent. |
| [`predict`](#sklearn.linear_model.SGDOneClassSVM.predict "sklearn.linear_model.SGDOneClassSVM.predict")(X) | Return labels (1 inlier, -1 outlier) of the samples. |
| [`score_samples`](#sklearn.linear_model.SGDOneClassSVM.score_samples "sklearn.linear_model.SGDOneClassSVM.score_samples")(X) | Raw scoring function of the samples. |
| [`set_params`](#sklearn.linear_model.SGDOneClassSVM.set_params "sklearn.linear_model.SGDOneClassSVM.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`sparsify`](#sklearn.linear_model.SGDOneClassSVM.sparsify "sklearn.linear_model.SGDOneClassSVM.sparsify")() | Convert coefficient matrix to sparse format. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L2447)
Signed distance to the separating hyperplane.
Signed distance is positive for an inlier and negative for an outlier.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Testing data.
Returns:
**dec**array-like, shape (n\_samples,)
Decision function values of the samples.
densify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L477)
Convert coefficient matrix to dense array format.
Converts the `coef_` member (back) to a numpy.ndarray. This is the default format of `coef_` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns:
self
Fitted estimator.
fit(*X*, *y=None*, *coef\_init=None*, *offset\_init=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L2400)
Fit linear One-Class SVM with Stochastic Gradient Descent.
This solves an equivalent optimization problem of the One-Class SVM primal optimization problem and returns a weight vector w and an offset rho such that the decision function is given by <w, x> - rho.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Training data.
**y**Ignored
Not used, present for API consistency by convention.
**coef\_init**array, shape (n\_classes, n\_features)
The initial coefficients to warm-start the optimization.
**offset\_init**array, shape (n\_classes,)
The initial offset to warm-start the optimization.
**sample\_weight**array-like, shape (n\_samples,), optional
Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class\_weight (passed through the constructor) if class\_weight is specified.
Returns:
**self**object
Returns a fitted instance of self.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L956)
Perform fit on X and returns labels for X.
Returns -1 for outliers and 1 for inliers.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**y**ndarray of shape (n\_samples,)
1 for inliers, -1 for outliers.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
partial\_fit(*X*, *y=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L2314)
Fit linear One-Class SVM with Stochastic Gradient Descent.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Subset of the training data.
**y**Ignored
Not used, present for API consistency by convention.
**sample\_weight**array-like, shape (n\_samples,), optional
Weights applied to individual samples. If not provided, uniform weights are assumed.
Returns:
**self**object
Returns a fitted instance of self.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L2487)
Return labels (1 inlier, -1 outlier) of the samples.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Testing data.
Returns:
**y**array, shape (n\_samples,)
Labels of the samples.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_stochastic_gradient.py#L2471)
Raw scoring function of the samples.
Parameters:
**X**{array-like, sparse matrix}, shape (n\_samples, n\_features)
Testing data.
Returns:
**score\_samples**array-like, shape (n\_samples,)
Unshifted scoring function values of the samples.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
sparsify()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L497)
Convert coefficient matrix to sparse format.
Converts the `coef_` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The `intercept_` member is not converted.
Returns:
self
Fitted estimator.
#### Notes
For non-sparse models, i.e. when there are not many zeros in `coef_`, this may actually *increase* memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with `(coef_ == 0).sum()`, must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial\_fit method (if any) will not work until you call densify.
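A short sketch of the rule of thumb above; the random data and the threshold check are assumptions for illustration only.
```
import numpy as np
from sklearn.linear_model import SGDOneClassSVM

X = np.random.RandomState(0).rand(200, 50)
est = SGDOneClassSVM(random_state=0).fit(X)

zero_fraction = (est.coef_ == 0).mean()   # fraction of zero coefficients
if zero_fraction > 0.5:                   # rule of thumb from the Notes above
    est.sparsify()                        # coef_ becomes a scipy.sparse matrix
print(zero_fraction, type(est.coef_))
```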
Examples using `sklearn.linear_model.SGDOneClassSVM`
----------------------------------------------------
[One-Class SVM versus One-Class SVM using Stochastic Gradient Descent](../../auto_examples/linear_model/plot_sgdocsvm_vs_ocsvm#sphx-glr-auto-examples-linear-model-plot-sgdocsvm-vs-ocsvm-py)
[Comparing anomaly detection algorithms for outlier detection on toy datasets](../../auto_examples/miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py)
scikit_learn sklearn.metrics.pairwise.haversine_distances sklearn.metrics.pairwise.haversine\_distances
=============================================
sklearn.metrics.pairwise.haversine\_distances(*X*, *Y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L821)
Compute the Haversine distance between samples in X and Y.
The Haversine (or great circle) distance is the angular distance between two points on the surface of a sphere. The first coordinate of each point is assumed to be the latitude, the second is the longitude, given in radians. The dimension of the data must be 2.
\[D(x, y) = 2\arcsin\left[\sqrt{\sin^2\left(\frac{x\_1 - y\_1}{2}\right) + \cos(x\_1)\cos(y\_1)\sin^2\left(\frac{x\_2 - y\_2}{2}\right)}\right]\]
Parameters:
**X**array-like of shape (n\_samples\_X, 2)
A feature array.
**Y**array-like of shape (n\_samples\_Y, 2), default=None
An optional second feature array. If `None`, uses `Y=X`.
Returns:
**distance**ndarray of shape (n\_samples\_X, n\_samples\_Y)
The distance matrix.
#### Notes
As the Earth is nearly spherical, the haversine formula provides a good approximation of the distance between two points on the Earth's surface, with an error of less than 1% on average.
#### Examples
We want to calculate the distance between the Ezeiza Airport (Buenos Aires, Argentina) and the Charles de Gaulle Airport (Paris, France).
```
>>> from sklearn.metrics.pairwise import haversine_distances
>>> from math import radians
>>> bsas = [-34.83333, -58.5166646]
>>> paris = [49.0083899664, 2.53844117956]
>>> bsas_in_radians = [radians(_) for _ in bsas]
>>> paris_in_radians = [radians(_) for _ in paris]
>>> result = haversine_distances([bsas_in_radians, paris_in_radians])
>>> result * 6371000/1000 # multiply by Earth radius to get kilometers
array([[ 0. , 11099.54035582],
[11099.54035582, 0. ]])
```
scikit_learn sklearn.model_selection.check_cv sklearn.model\_selection.check\_cv
==================================
sklearn.model\_selection.check\_cv(*cv=5*, *y=None*, *\**, *classifier=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/model_selection/_split.py#L2290)
Input checker utility for building a cross-validator.
Parameters:
**cv**int, cross-validation generator or an iterable, default=5
Determines the cross-validation splitting strategy. Possible inputs for cv are:
* None, to use the default 5-fold cross validation,
* integer, to specify the number of folds,
* [CV splitter](https://scikit-learn.org/1.1/glossary.html#term-CV-splitter),
* an iterable that generates (train, test) splits as arrays of indices.
For integer/None inputs, if classifier is True and `y` is either binary or multiclass, [`StratifiedKFold`](sklearn.model_selection.stratifiedkfold#sklearn.model_selection.StratifiedKFold "sklearn.model_selection.StratifiedKFold") is used. In all other cases, [`KFold`](sklearn.model_selection.kfold#sklearn.model_selection.KFold "sklearn.model_selection.KFold") is used.
Refer to the [User Guide](../cross_validation#cross-validation) for the various cross-validation strategies that can be used here.
Changed in version 0.22: `cv` default value changed from 3-fold to 5-fold.
**y**array-like, default=None
The target variable for supervised learning problems.
**classifier**bool, default=False
Whether the task is a classification task, in which case stratified KFold will be used.
Returns:
**checked\_cv**a cross-validator instance.
The return value is a cross-validator which generates the train/test splits via the `split` method.
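A minimal sketch of the resolution rules described above; the toy targets are an assumption. With `classifier=True` and a binary `y`, an integer `cv` resolves to `StratifiedKFold`, otherwise to `KFold`, and CV splitter objects pass through unchanged.
```
import numpy as np
from sklearn.model_selection import check_cv, KFold

y = np.array([0, 1] * 5)                            # assumed binary targets, 10 samples

cv_strat = check_cv(cv=5, y=y, classifier=True)     # StratifiedKFold(n_splits=5)
cv_plain = check_cv(cv=5, y=y, classifier=False)    # KFold(n_splits=5)
cv_given = check_cv(cv=KFold(n_splits=3))           # passed splitters are returned as-is

print(type(cv_strat).__name__, type(cv_plain).__name__, type(cv_given).__name__)
for train_idx, test_idx in cv_strat.split(np.zeros((10, 1)), y):
    pass   # each iteration yields (train, test) index arrays
```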
scikit_learn sklearn.linear_model.TweedieRegressor sklearn.linear\_model.TweedieRegressor
======================================
*class*sklearn.linear\_model.TweedieRegressor(*\**, *power=0.0*, *alpha=1.0*, *fit\_intercept=True*, *link='auto'*, *max\_iter=100*, *tol=0.0001*, *warm\_start=False*, *verbose=0*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_glm/glm.py#L682)
Generalized Linear Model with a Tweedie distribution.
This estimator can be used to model different GLMs depending on the `power` parameter, which determines the underlying distribution.
Read more in the [User Guide](../linear_model#generalized-linear-regression).
New in version 0.23.
Parameters:
**power**float, default=0
The power determines the underlying target distribution according to the following table:
| Power | Distribution |
| --- | --- |
| 0 | Normal |
| 1 | Poisson |
| (1,2) | Compound Poisson Gamma |
| 2 | Gamma |
| 3 | Inverse Gaussian |
For `0 < power < 1`, no distribution exists.
**alpha**float, default=1
Constant that multiplies the penalty term and thus determines the regularization strength. `alpha = 0` is equivalent to unpenalized GLMs. In this case, the design matrix `X` must have full column rank (no collinearities). Values must be in the range `[0.0, inf)`.
**fit\_intercept**bool, default=True
Specifies if a constant (a.k.a. bias or intercept) should be added to the linear predictor (X @ coef + intercept).
**link**{‘auto’, ‘identity’, ‘log’}, default=’auto’
The link function of the GLM, i.e. mapping from linear predictor `X @ coef + intercept` to prediction `y_pred`. Option ‘auto’ sets the link depending on the chosen `power` parameter as follows:
* ‘identity’ for `power <= 0`, e.g. for the Normal distribution
* ‘log’ for `power > 0`, e.g. for Poisson, Gamma and Inverse Gaussian distributions
**max\_iter**int, default=100
The maximal number of iterations for the solver. Values must be in the range `[1, inf)`.
**tol**float, default=1e-4
Stopping criterion. For the lbfgs solver, the iteration will stop when `max{|g_j|, j = 1, ..., d} <= tol` where `g_j` is the j-th component of the gradient (derivative) of the objective function. Values must be in the range `(0.0, inf)`.
**warm\_start**bool, default=False
If set to `True`, reuse the solution of the previous call to `fit` as initialization for `coef_` and `intercept_`.
**verbose**int, default=0
For the lbfgs solver set verbose to any positive number for verbosity. Values must be in the range `[0, inf)`.
Attributes:
**coef\_**array of shape (n\_features,)
Estimated coefficients for the linear predictor (`X @ coef_ + intercept_`) in the GLM.
**intercept\_**float
Intercept (a.k.a. bias) added to linear predictor.
**n\_iter\_**int
Actual number of iterations used in the solver.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`PoissonRegressor`](sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor "sklearn.linear_model.PoissonRegressor")
Generalized Linear Model with a Poisson distribution.
[`GammaRegressor`](sklearn.linear_model.gammaregressor#sklearn.linear_model.GammaRegressor "sklearn.linear_model.GammaRegressor")
Generalized Linear Model with a Gamma distribution.
#### Examples
```
>>> from sklearn import linear_model
>>> clf = linear_model.TweedieRegressor()
>>> X = [[1, 2], [2, 3], [3, 4], [4, 3]]
>>> y = [2, 3.5, 5, 5.5]
>>> clf.fit(X, y)
TweedieRegressor()
>>> clf.score(X, y)
0.839...
>>> clf.coef_
array([0.599..., 0.299...])
>>> clf.intercept_
1.600...
>>> clf.predict([[1, 1], [3, 4]])
array([2.500..., 4.599...])
```
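Building on the `power` table above, a brief sketch (data and parameter values assumed for illustration) of how the same estimator covers several GLM families:
```
import numpy as np
from sklearn.linear_model import TweedieRegressor

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 3.0, 5.0, 9.0])                               # strictly positive targets (assumed)

normal = TweedieRegressor(power=0, link="identity").fit(X, y)    # Normal distribution
poisson = TweedieRegressor(power=1, link="log").fit(X, y)        # Poisson distribution
gamma = TweedieRegressor(power=2, link="log").fit(X, y)          # Gamma distribution
for name, model in [("normal", normal), ("poisson", poisson), ("gamma", gamma)]:
    print(name, model.predict([[2.5]]))
```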
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.TweedieRegressor.fit "sklearn.linear_model.TweedieRegressor.fit")(X, y[, sample\_weight]) | Fit a Generalized Linear Model. |
| [`get_params`](#sklearn.linear_model.TweedieRegressor.get_params "sklearn.linear_model.TweedieRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.TweedieRegressor.predict "sklearn.linear_model.TweedieRegressor.predict")(X) | Predict using GLM with feature matrix X. |
| [`score`](#sklearn.linear_model.TweedieRegressor.score "sklearn.linear_model.TweedieRegressor.score")(X, y[, sample\_weight]) | Compute D^2, the percentage of deviance explained. |
| [`set_params`](#sklearn.linear_model.TweedieRegressor.set_params "sklearn.linear_model.TweedieRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
*property*family
DEPRECATED: Attribute `family` was deprecated in version 1.1 and will be removed in 1.3.
Ensure backward compatibility for the time of deprecation.
fit(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_glm/glm.py#L144)
Fit a Generalized Linear Model.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,)
Target values.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**self**object
Fitted model.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_glm/glm.py#L333)
Predict using GLM with feature matrix X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Samples.
Returns:
**y\_pred**array of shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_glm/glm.py#L351)
Compute D^2, the percentage of deviance explained.
D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 uses the deviance of this GLM, see the [User Guide](../model_evaluation#regression-metrics).
D^2 is defined as \(D^2 = 1-\frac{D(y\_{true},y\_{pred})}{D\_{null}}\), where \(D\_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y\_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample\_weight. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,)
True values of target.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
D^2 of self.predict(X) w.r.t. y.
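As a sanity check of the definition above, a sketch (toy data assumed) that recomputes D^2 from mean Tweedie deviances; since the ratio of mean deviances equals the ratio of summed deviances, it should agree with `score` up to floating point:
```
import numpy as np
from sklearn.linear_model import TweedieRegressor
from sklearn.metrics import mean_tweedie_deviance

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 3.5, 5.0, 5.5])

reg = TweedieRegressor(power=1, link="log").fit(X, y)
dev_model = mean_tweedie_deviance(y, reg.predict(X), power=1)
dev_null = mean_tweedie_deviance(y, np.full_like(y, y.mean()), power=1)
print(reg.score(X, y), 1 - dev_model / dev_null)   # the two values should match
```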
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.linear_model.TweedieRegressor`
------------------------------------------------------
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Tweedie regression on insurance claims](../../auto_examples/linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
scikit_learn sklearn.discriminant_analysis.LinearDiscriminantAnalysis sklearn.discriminant\_analysis.LinearDiscriminantAnalysis
=========================================================
*class*sklearn.discriminant\_analysis.LinearDiscriminantAnalysis(*solver='svd'*, *shrinkage=None*, *priors=None*, *n\_components=None*, *store\_covariance=False*, *tol=0.0001*, *covariance\_estimator=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L169)
Linear Discriminant Analysis.
A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.
The model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix.
The fitted model can also be used to reduce the dimensionality of the input by projecting it to the most discriminative directions, using the `transform` method.
New in version 0.17: *LinearDiscriminantAnalysis*.
Read more in the [User Guide](../lda_qda#lda-qda).
Parameters:
**solver**{‘svd’, ‘lsqr’, ‘eigen’}, default=’svd’
Solver to use, possible values:
* ‘svd’: Singular value decomposition (default). Does not compute the covariance matrix, therefore this solver is recommended for data with a large number of features.
* ‘lsqr’: Least squares solution. Can be combined with shrinkage or custom covariance estimator.
* ‘eigen’: Eigenvalue decomposition. Can be combined with shrinkage or custom covariance estimator.
**shrinkage**‘auto’ or float, default=None
Shrinkage parameter, possible values:
* None: no shrinkage (default).
* ‘auto’: automatic shrinkage using the Ledoit-Wolf lemma.
* float between 0 and 1: fixed shrinkage parameter.
This should be left to None if `covariance_estimator` is used. Note that shrinkage works only with ‘lsqr’ and ‘eigen’ solvers.
**priors**array-like of shape (n\_classes,), default=None
The class prior probabilities. By default, the class proportions are inferred from the training data.
**n\_components**int, default=None
Number of components (<= min(n\_classes - 1, n\_features)) for dimensionality reduction. If None, will be set to min(n\_classes - 1, n\_features). This parameter only affects the `transform` method.
**store\_covariance**bool, default=False
If True, explicitly compute the weighted within-class covariance matrix when solver is ‘svd’. The matrix is always computed and stored for the other solvers.
New in version 0.17.
**tol**float, default=1.0e-4
Absolute threshold for a singular value of X to be considered significant, used to estimate the rank of X. Dimensions whose singular values are non-significant are discarded. Only used if solver is ‘svd’.
New in version 0.17.
**covariance\_estimator**covariance estimator, default=None
If not None, `covariance_estimator` is used to estimate the covariance matrices instead of relying on the empirical covariance estimator (with potential shrinkage). The object should have a fit method and a `covariance_` attribute like the estimators in [`sklearn.covariance`](../classes#module-sklearn.covariance "sklearn.covariance"). If None, the shrinkage parameter drives the estimate.
This should be left to None if `shrinkage` is used. Note that `covariance_estimator` works only with ‘lsqr’ and ‘eigen’ solvers.
New in version 0.24.
Attributes:
**coef\_**ndarray of shape (n\_features,) or (n\_classes, n\_features)
Weight vector(s).
**intercept\_**ndarray of shape (n\_classes,)
Intercept term.
**covariance\_**array-like of shape (n\_features, n\_features)
Weighted within-class covariance matrix. It corresponds to `sum_k prior_k * C_k` where `C_k` is the covariance matrix of the samples in class `k`. The `C_k` are estimated using the (potentially shrunk) biased estimator of covariance. If solver is ‘svd’, only exists when `store_covariance` is True.
**explained\_variance\_ratio\_**ndarray of shape (n\_components,)
Percentage of variance explained by each of the selected components. If `n_components` is not set then all components are stored and the sum of explained variances is equal to 1.0. Only available when eigen or svd solver is used.
**means\_**array-like of shape (n\_classes, n\_features)
Class-wise means.
**priors\_**array-like of shape (n\_classes,)
Class priors (sum to 1).
**scalings\_**array-like of shape (rank, n\_classes - 1)
Scaling of the features in the space spanned by the class centroids. Only available for ‘svd’ and ‘eigen’ solvers.
**xbar\_**array-like of shape (n\_features,)
Overall mean. Only present if solver is ‘svd’.
**classes\_**array-like of shape (n\_classes,)
Unique class labels.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`QuadraticDiscriminantAnalysis`](sklearn.discriminant_analysis.quadraticdiscriminantanalysis#sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis "sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis")
Quadratic Discriminant Analysis.
#### Examples
```
>>> import numpy as np
>>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = LinearDiscriminantAnalysis()
>>> clf.fit(X, y)
LinearDiscriminantAnalysis()
>>> print(clf.predict([[-0.8, -1]]))
[1]
```
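As noted in the parameter descriptions above, shrinkage is only available with the ‘lsqr’ and ‘eigen’ solvers. A small sketch with synthetic data (assumed, for illustration only):
```
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=100, n_features=20, random_state=0)
lda_auto = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)   # Ledoit-Wolf shrinkage
lda_fixed = LinearDiscriminantAnalysis(solver="eigen", shrinkage=0.2).fit(X, y)    # fixed shrinkage amount
print(lda_auto.score(X, y), lda_fixed.score(X, y))
```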
#### Methods
| | |
| --- | --- |
| [`decision_function`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.decision_function "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.decision_function")(X) | Apply decision function to an array of samples. |
| [`fit`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.fit "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.fit")(X, y) | Fit the Linear Discriminant Analysis model. |
| [`fit_transform`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.fit_transform "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.fit_transform")(X[, y]) | Fit to data, then transform it. |
| [`get_feature_names_out`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.get_feature_names_out "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.get_params "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.predict "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.predict")(X) | Predict class labels for samples in X. |
| [`predict_log_proba`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.predict_log_proba "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.predict_log_proba")(X) | Estimate log probability. |
| [`predict_proba`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.predict_proba "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.predict_proba")(X) | Estimate probability. |
| [`score`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.score "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`set_params`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.set_params "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`transform`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.transform "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.transform")(X) | Project data to maximize class separation. |
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L694)
Apply decision function to an array of samples.
The decision function is equal (up to a constant factor) to the log-posterior of the model, i.e. `log p(y = k | x)`. In a binary classification setting this instead corresponds to the difference `log p(y = 1 | x) - log p(y = 0 | x)`. See [Mathematical formulation of the LDA and QDA classifiers](../lda_qda#lda-qda-math).
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Array of samples (test vectors).
Returns:
**C**ndarray of shape (n\_samples,) or (n\_samples, n\_classes)
Decision function values related to each class, per sample. In the two-class case, the shape is (n\_samples,), giving the log likelihood ratio of the positive class.
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L528)
Fit the Linear Discriminant Analysis model.
Changed in version 0.19: *store\_covariance* has been moved to main constructor.
Changed in version 0.19: *tol* has been moved to main constructor.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,)
Target values.
Returns:
**self**object
Fitted estimator.
fit\_transform(*X*, *y=None*, *\*\*fit\_params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L839)
Fit to data, then transform it.
Fits transformer to `X` and `y` with optional parameters `fit_params` and returns a transformed version of `X`.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs), default=None
Target values (None for unsupervised transformations).
**\*\*fit\_params**dict
Additional fit parameters.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_features\_new)
Transformed array.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L909)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Only used to validate feature names with the names seen in [`fit`](#sklearn.discriminant_analysis.LinearDiscriminantAnalysis.fit "sklearn.discriminant_analysis.LinearDiscriminantAnalysis.fit").
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L433)
Predict class labels for samples in X.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The data matrix for which we want to get the predictions.
Returns:
**y\_pred**ndarray of shape (n\_samples,)
Vector containing the class labels for each sample.
predict\_log\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L677)
Estimate log probability.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
Returns:
**C**ndarray of shape (n\_samples, n\_classes)
Estimated log probabilities.
predict\_proba(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L655)
Estimate probability.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
Returns:
**C**ndarray of shape (n\_samples, n\_classes)
Estimated probabilities.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L640)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of `self.predict(X)` wrt. `y`.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/discriminant_analysis.py#L626)
Project data to maximize class separation.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Input data.
Returns:
**X\_new**ndarray of shape (n\_samples, n\_components) or (n\_samples, min(rank, n\_components))
Transformed data. In the case of the ‘svd’ solver, the shape is (n\_samples, min(rank, n\_components)).
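A brief sketch of `transform` used for supervised dimensionality reduction; the iris data is an assumed example, not part of the original documentation:
```
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_new = lda.transform(X)
print(X_new.shape)   # (150, 2): with 3 classes, at most n_classes - 1 = 2 components
```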
Examples using `sklearn.discriminant_analysis.LinearDiscriminantAnalysis`
-------------------------------------------------------------------------
[Linear and Quadratic Discriminant Analysis with covariance ellipsoid](../../auto_examples/classification/plot_lda_qda#sphx-glr-auto-examples-classification-plot-lda-qda-py)
[Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification](../../auto_examples/classification/plot_lda#sphx-glr-auto-examples-classification-plot-lda-py)
[Comparison of LDA and PCA 2D projection of Iris dataset](../../auto_examples/decomposition/plot_pca_vs_lda#sphx-glr-auto-examples-decomposition-plot-pca-vs-lda-py)
[Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](../../auto_examples/manifold/plot_lle_digits#sphx-glr-auto-examples-manifold-plot-lle-digits-py)
[Dimensionality Reduction with Neighborhood Components Analysis](../../auto_examples/neighbors/plot_nca_dim_reduction#sphx-glr-auto-examples-neighbors-plot-nca-dim-reduction-py)
scikit_learn sklearn.random_projection.johnson_lindenstrauss_min_dim sklearn.random\_projection.johnson\_lindenstrauss\_min\_dim
===========================================================
sklearn.random\_projection.johnson\_lindenstrauss\_min\_dim(*n\_samples*, *\**, *eps=0.1*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/random_projection.py#L52)
Find a ‘safe’ number of components to randomly project to.
The distortion introduced by a random projection `p` only changes the distance between two points by a factor (1 ± eps) in a Euclidean space with good probability. The projection `p` is an eps-embedding as defined by:
(1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2
Where u and v are any rows taken from a dataset of shape (n\_samples, n\_features), eps is in (0, 1), and p is a projection by a random Gaussian N(0, 1) matrix of shape (n\_components, n\_features) (or a sparse Achlioptas matrix).
The minimum number of components to guarantee the eps-embedding is given by:
n\_components >= 4 log(n\_samples) / (eps^2 / 2 - eps^3 / 3)
Note that the number of dimensions is independent of the original number of features; it instead depends on the size of the dataset: the larger the dataset, the higher the minimal dimensionality of an eps-embedding.
Read more in the [User Guide](../random_projection#johnson-lindenstrauss).
Parameters:
**n\_samples**int or array-like of int
Number of samples that should be an integer greater than 0. If an array is given, it will compute a safe number of components array-wise.
**eps**float or ndarray of shape (n\_components,), dtype=float, default=0.1
Maximum distortion rate in the range (0, 1) as defined by the Johnson-Lindenstrauss lemma. If an array is given, it will compute a safe number of components array-wise.
Returns:
**n\_components**int or ndarray of int
The minimal number of components to guarantee with good probability an eps-embedding with n\_samples.
#### References
[1] <https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma>
[2] Sanjoy Dasgupta and Anupam Gupta, 1999, “An elementary proof of the Johnson-Lindenstrauss Lemma.” <http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.3654>
#### Examples
```
>>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
>>> johnson_lindenstrauss_min_dim(1e6, eps=0.5)
663
```
```
>>> johnson_lindenstrauss_min_dim(1e6, eps=[0.5, 0.1, 0.01])
array([ 663, 11841, 1112658])
```
```
>>> johnson_lindenstrauss_min_dim([1e4, 1e5, 1e6], eps=0.1)
array([ 7894, 9868, 11841])
```
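The values above follow directly from the bound stated earlier; a small sketch recomputing it by hand for the first case (`n_samples=1e6`, `eps=0.5`):
```
import numpy as np
from sklearn.random_projection import johnson_lindenstrauss_min_dim

n_samples, eps = 1e6, 0.5
bound = 4 * np.log(n_samples) / (eps ** 2 / 2 - eps ** 3 / 3)
print(bound, johnson_lindenstrauss_min_dim(n_samples, eps=eps))   # ~663.1 vs 663
```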
Examples using `sklearn.random_projection.johnson_lindenstrauss_min_dim`
------------------------------------------------------------------------
[The Johnson-Lindenstrauss bound for embedding with random projections](../../auto_examples/miscellaneous/plot_johnson_lindenstrauss_bound#sphx-glr-auto-examples-miscellaneous-plot-johnson-lindenstrauss-bound-py)
scikit_learn sklearn.datasets.make_hastie_10_2 sklearn.datasets.make\_hastie\_10\_2
====================================
sklearn.datasets.make\_hastie\_10\_2(*n\_samples=12000*, *\**, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L458)
Generate data for binary classification used in Hastie et al. 2009, Example 10.2.
The ten features are standard independent Gaussian and the target `y` is defined by:
```
y[i] = 1 if np.sum(X[i] ** 2) > 9.34 else -1
```
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**n\_samples**int, default=12000
The number of samples.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**X**ndarray of shape (n\_samples, 10)
The input samples.
**y**ndarray of shape (n\_samples,)
The output values.
See also
[`make_gaussian_quantiles`](sklearn.datasets.make_gaussian_quantiles#sklearn.datasets.make_gaussian_quantiles "sklearn.datasets.make_gaussian_quantiles")
A generalization of this dataset approach.
#### References
[1] T. Hastie, R. Tibshirani and J. Friedman, “Elements of Statistical Learning Ed. 2”, Springer, 2009.
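A minimal usage sketch (sample size assumed); the thresholding rule above splits the samples into two roughly balanced classes labelled -1 and 1:
```
import numpy as np
from sklearn.datasets import make_hastie_10_2

X, y = make_hastie_10_2(n_samples=2000, random_state=0)
print(X.shape, y.shape)                 # (2000, 10) (2000,)
print(np.unique(y), (y == 1).mean())    # labels {-1, 1}, roughly half positive
```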
Examples using `sklearn.datasets.make_hastie_10_2`
--------------------------------------------------
[Discrete versus Real AdaBoost](../../auto_examples/ensemble/plot_adaboost_hastie_10_2#sphx-glr-auto-examples-ensemble-plot-adaboost-hastie-10-2-py)
[Early stopping of Gradient Boosting](../../auto_examples/ensemble/plot_gradient_boosting_early_stopping#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-early-stopping-py)
[Gradient Boosting regularization](../../auto_examples/ensemble/plot_gradient_boosting_regularization#sphx-glr-auto-examples-ensemble-plot-gradient-boosting-regularization-py)
[Demonstration of multi-metric evaluation on cross\_val\_score and GridSearchCV](../../auto_examples/model_selection/plot_multi_metric_evaluation#sphx-glr-auto-examples-model-selection-plot-multi-metric-evaluation-py)
scikit_learn sklearn.datasets.load_sample_images sklearn.datasets.load\_sample\_images
=====================================
sklearn.datasets.load\_sample\_images()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_base.py#L1360)
Load sample images for image manipulation.
Loads both the `china` and `flower` sample images.
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/loading_other_datasets.html#sample-images).
Returns:
**data**[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Dictionary-like object, with the following attributes.
imageslist of ndarray of shape (427, 640, 3)
The two sample images.
filenameslist
The filenames for the images.
DESCRstr
The full description of the dataset.
#### Examples
To load the data and visualize the images:
```
>>> from sklearn.datasets import load_sample_images
>>> dataset = load_sample_images()
>>> len(dataset.images)
2
>>> first_img_data = dataset.images[0]
>>> first_img_data.shape
(427, 640, 3)
>>> first_img_data.dtype
dtype('uint8')
```
scikit_learn sklearn.datasets.make_regression sklearn.datasets.make\_regression
=================================
sklearn.datasets.make\_regression(*n\_samples=100*, *n\_features=100*, *\**, *n\_informative=10*, *n\_targets=1*, *bias=0.0*, *effective\_rank=None*, *tail\_strength=0.5*, *noise=0.0*, *shuffle=True*, *coef=False*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L505)
Generate a random regression problem.
The input set can either be well conditioned (by default) or have a low rank-fat tail singular profile. See [`make_low_rank_matrix`](sklearn.datasets.make_low_rank_matrix#sklearn.datasets.make_low_rank_matrix "sklearn.datasets.make_low_rank_matrix") for more details.
The output is generated by applying a (potentially biased) random linear regression model with `n_informative` nonzero regressors to the previously generated input and some gaussian centered noise with some adjustable scale.
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**n\_samples**int, default=100
The number of samples.
**n\_features**int, default=100
The number of features.
**n\_informative**int, default=10
The number of informative features, i.e., the number of features used to build the linear model used to generate the output.
**n\_targets**int, default=1
The number of regression targets, i.e., the dimension of the y output vector associated with a sample. By default, the output is a scalar.
**bias**float, default=0.0
The bias term in the underlying linear model.
**effective\_rank**int, default=None
If not None:
The approximate number of singular vectors required to explain most of the input data by linear combinations. Using this kind of singular spectrum in the input allows the generator to reproduce the correlations often observed in practice.
If None:
The input set is well conditioned, centered and gaussian with unit variance.
**tail\_strength**float, default=0.5
The relative importance of the fat noisy tail of the singular values profile if `effective_rank` is not None. When a float, it should be between 0 and 1.
**noise**float, default=0.0
The standard deviation of the gaussian noise applied to the output.
**shuffle**bool, default=True
Shuffle the samples and the features.
**coef**bool, default=False
If True, the coefficients of the underlying linear model are returned.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**X**ndarray of shape (n\_samples, n\_features)
The input samples.
**y**ndarray of shape (n\_samples,) or (n\_samples, n\_targets)
The output values.
**coef**ndarray of shape (n\_features,) or (n\_features, n\_targets)
The coefficient of the underlying linear model. It is returned only if coef is True.
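A minimal usage sketch (parameter values assumed): requesting `coef=True` also returns the generating coefficients, in which only the informative features are nonzero:
```
from sklearn.datasets import make_regression

X, y, coef = make_regression(
    n_samples=200, n_features=5, n_informative=2, noise=0.1,
    coef=True, random_state=0,
)
print(X.shape, y.shape, coef.shape)   # (200, 5) (200,) (5,)
print((coef != 0).sum())              # 2: only the informative features are nonzero
```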
Examples using `sklearn.datasets.make_regression`
-------------------------------------------------
[Release Highlights for scikit-learn 0.23](../../auto_examples/release_highlights/plot_release_highlights_0_23_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-0-23-0-py)
[Prediction Latency](../../auto_examples/applications/plot_prediction_latency#sphx-glr-auto-examples-applications-plot-prediction-latency-py)
[Comparing Linear Bayesian Regressors](../../auto_examples/linear_model/plot_ard#sphx-glr-auto-examples-linear-model-plot-ard-py)
[Fitting an Elastic Net with a precomputed Gram Matrix and Weighted Samples](../../auto_examples/linear_model/plot_elastic_net_precomputed_gram_matrix_with_weighted_samples#sphx-glr-auto-examples-linear-model-plot-elastic-net-precomputed-gram-matrix-with-weighted-samples-py)
[HuberRegressor vs Ridge on dataset with strong outliers](../../auto_examples/linear_model/plot_huber_vs_ridge#sphx-glr-auto-examples-linear-model-plot-huber-vs-ridge-py)
[Lasso on dense and sparse data](../../auto_examples/linear_model/plot_lasso_dense_vs_sparse_data#sphx-glr-auto-examples-linear-model-plot-lasso-dense-vs-sparse-data-py)
[Plot Ridge coefficients as a function of the L2 regularization](../../auto_examples/linear_model/plot_ridge_coeffs#sphx-glr-auto-examples-linear-model-plot-ridge-coeffs-py)
[Robust linear model estimation using RANSAC](../../auto_examples/linear_model/plot_ransac#sphx-glr-auto-examples-linear-model-plot-ransac-py)
[Train error vs Test error](../../auto_examples/model_selection/plot_train_error_vs_test_error#sphx-glr-auto-examples-model-selection-plot-train-error-vs-test-error-py)
[Effect of transforming the targets in regression model](../../auto_examples/compose/plot_transformed_target#sphx-glr-auto-examples-compose-plot-transformed-target-py)
scikit_learn sklearn.metrics.pairwise_distances_chunked sklearn.metrics.pairwise\_distances\_chunked
============================================
sklearn.metrics.pairwise\_distances\_chunked(*X*, *Y=None*, *\**, *reduce\_func=None*, *metric='euclidean'*, *n\_jobs=None*, *working\_memory=None*, *\*\*kwds*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L1692)
Generate a distance matrix chunk by chunk with optional reduction.
In cases where not all of a pairwise distance matrix needs to be stored at once, this is used to calculate pairwise distances in `working_memory`-sized chunks. If `reduce_func` is given, it is run on each chunk and its return values are concatenated into lists, arrays or sparse matrices.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_samples\_X) or (n\_samples\_X, n\_features)
Array of pairwise distances between samples, or a feature array. The shape of the array should be (n\_samples\_X, n\_samples\_X) if metric=’precomputed’ and (n\_samples\_X, n\_features) otherwise.
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
An optional second feature array. Only allowed if metric != “precomputed”.
**reduce\_func**callable, default=None
The function which is applied on each chunk of the distance matrix, reducing it to needed values. `reduce_func(D_chunk, start)` is called repeatedly, where `D_chunk` is a contiguous vertical slice of the pairwise distance matrix, starting at row `start`. It should return one of: None; an array, a list, or a sparse matrix of length `D_chunk.shape[0]`; or a tuple of such objects. Returning None is useful for in-place operations, rather than reductions.
If None, pairwise\_distances\_chunked returns a generator of vertical chunks of the distance matrix.
**metric**str or callable, default=’euclidean’
The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE\_DISTANCE\_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them.
**n\_jobs**int, default=None
The number of jobs to use for the computation. This works by breaking down the pairwise matrix into n\_jobs even slices and computing them in parallel.
`None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**working\_memory**int, default=None
The sought maximum memory for temporary distance matrix chunks. When None (default), the value of `sklearn.get_config()['working_memory']` is used.
**\*\*kwds**optional keyword parameters
Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples.
Yields:
**D\_chunk**{ndarray, sparse matrix}
A contiguous slice of distance matrix, optionally processed by `reduce_func`.
#### Examples
Without reduce\_func:
```
>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances_chunked
>>> X = np.random.RandomState(0).rand(5, 3)
>>> D_chunk = next(pairwise_distances_chunked(X))
>>> D_chunk
array([[0. ..., 0.29..., 0.41..., 0.19..., 0.57...],
[0.29..., 0. ..., 0.57..., 0.41..., 0.76...],
[0.41..., 0.57..., 0. ..., 0.44..., 0.90...],
[0.19..., 0.41..., 0.44..., 0. ..., 0.51...],
[0.57..., 0.76..., 0.90..., 0.51..., 0. ...]])
```
Retrieve all neighbors and average distance within radius r:
```
>>> r = .2
>>> def reduce_func(D_chunk, start):
... neigh = [np.flatnonzero(d < r) for d in D_chunk]
... avg_dist = (D_chunk * (D_chunk < r)).mean(axis=1)
... return neigh, avg_dist
>>> gen = pairwise_distances_chunked(X, reduce_func=reduce_func)
>>> neigh, avg_dist = next(gen)
>>> neigh
[array([0, 3]), array([1]), array([2]), array([0, 3]), array([4])]
>>> avg_dist
array([0.039..., 0. , 0. , 0.039..., 0. ])
```
When `r` is defined per sample, we need to make use of `start`:
```
>>> r = [.2, .4, .4, .3, .1]
>>> def reduce_func(D_chunk, start):
... neigh = [np.flatnonzero(d < r[i])
... for i, d in enumerate(D_chunk, start)]
... return neigh
>>> neigh = next(pairwise_distances_chunked(X, reduce_func=reduce_func))
>>> neigh
[array([0, 3]), array([0, 1]), array([2]), array([0, 3]), array([4])]
```
Force row-by-row generation by reducing `working_memory`:
```
>>> gen = pairwise_distances_chunked(X, reduce_func=reduce_func,
... working_memory=0)
>>> next(gen)
[array([0, 3])]
>>> next(gen)
[array([0, 1])]
```
scikit_learn sklearn.metrics.det_curve sklearn.metrics.det\_curve
==========================
sklearn.metrics.det\_curve(*y\_true*, *y\_score*, *pos\_label=None*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_ranking.py#L239)
Compute error rates for different probability thresholds.
Note
This metric is used for evaluation of ranking and error tradeoffs of a binary classification task.
Read more in the [User Guide](../model_evaluation#det-curve).
New in version 0.24.
Parameters:
**y\_true**ndarray of shape (n\_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos\_label should be explicitly given.
**y\_score**ndarray of shape of (n\_samples,)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision\_function” on some classifiers).
**pos\_label**int or str, default=None
The label of the positive class. When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an error will be raised.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**fpr**ndarray of shape (n\_thresholds,)
False positive rate (FPR) such that element i is the false positive rate of predictions with score >= thresholds[i]. This is occasionally referred to as false acceptance probability or fall-out.
**fnr**ndarray of shape (n\_thresholds,)
False negative rate (FNR) such that element i is the false negative rate of predictions with score >= thresholds[i]. This is occasionally referred to as false rejection or miss rate.
**thresholds**ndarray of shape (n\_thresholds,)
Decreasing score values.
See also
[`DetCurveDisplay.from_estimator`](sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay.from_estimator "sklearn.metrics.DetCurveDisplay.from_estimator")
Plot DET curve given an estimator and some data.
[`DetCurveDisplay.from_predictions`](sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay.from_predictions "sklearn.metrics.DetCurveDisplay.from_predictions")
Plot DET curve given the true and predicted labels.
[`DetCurveDisplay`](sklearn.metrics.detcurvedisplay#sklearn.metrics.DetCurveDisplay "sklearn.metrics.DetCurveDisplay")
DET curve visualization.
[`roc_curve`](sklearn.metrics.roc_curve#sklearn.metrics.roc_curve "sklearn.metrics.roc_curve")
Compute Receiver operating characteristic (ROC) curve.
[`precision_recall_curve`](sklearn.metrics.precision_recall_curve#sklearn.metrics.precision_recall_curve "sklearn.metrics.precision_recall_curve")
Compute precision-recall curve.
#### Examples
```
>>> import numpy as np
>>> from sklearn.metrics import det_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, fnr, thresholds = det_curve(y_true, y_scores)
>>> fpr
array([0.5, 0.5, 0. ])
>>> fnr
array([0. , 0.5, 0.5])
>>> thresholds
array([0.35, 0.4 , 0.8 ])
```
Examples using `sklearn.metrics.det_curve`
------------------------------------------
[Detection error tradeoff (DET) curve](../../auto_examples/model_selection/plot_det#sphx-glr-auto-examples-model-selection-plot-det-py)
scikit_learn sklearn.compose.ColumnTransformer sklearn.compose.ColumnTransformer
=================================
*class*sklearn.compose.ColumnTransformer(*transformers*, *\**, *remainder='drop'*, *sparse\_threshold=0.3*, *n\_jobs=None*, *transformer\_weights=None*, *verbose=False*, *verbose\_feature\_names\_out=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L39)
Applies transformers to columns of an array or pandas DataFrame.
This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each transformer will be concatenated to form a single feature space. This is useful for heterogeneous or columnar data, to combine several feature extraction mechanisms or transformations into a single transformer.
Read more in the [User Guide](../compose#column-transformer).
New in version 0.20.
Parameters:
**transformers**list of tuples
List of (name, transformer, columns) tuples specifying the transformer objects to be applied to subsets of the data.
namestr
Like in Pipeline and FeatureUnion, this allows the transformer and its parameters to be set using `set_params` and searched in grid search.
transformer{‘drop’, ‘passthrough’} or estimator
Estimator must support [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) and [transform](https://scikit-learn.org/1.1/glossary.html#term-transform). Special-cased strings ‘drop’ and ‘passthrough’ are accepted as well, to indicate to drop the columns or to pass them through untransformed, respectively.
columnsstr, array-like of str, int, array-like of int, array-like of bool, slice or callable
Indexes the data on its second axis. Integers are interpreted as positional columns, while strings can reference DataFrame columns by name. A scalar string or int should be used where `transformer` expects X to be a 1d array-like (vector), otherwise a 2d array will be passed to the transformer. A callable is passed the input data `X` and can return any of the above. To select multiple columns by name or dtype, you can use [`make_column_selector`](sklearn.compose.make_column_selector#sklearn.compose.make_column_selector "sklearn.compose.make_column_selector").
**remainder**{‘drop’, ‘passthrough’} or estimator, default=’drop’
By default, only the specified columns in `transformers` are transformed and combined in the output, and the non-specified columns are dropped. (default of `'drop'`). By specifying `remainder='passthrough'`, all remaining columns that were not specified in `transformers` will be automatically passed through. This subset of columns is concatenated with the output of the transformers. By setting `remainder` to be an estimator, the remaining non-specified columns will use the `remainder` estimator. The estimator must support [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) and [transform](https://scikit-learn.org/1.1/glossary.html#term-transform). Note that using this feature requires that the DataFrame columns input at [fit](https://scikit-learn.org/1.1/glossary.html#term-fit) and [transform](https://scikit-learn.org/1.1/glossary.html#term-transform) have identical order.
**sparse\_threshold**float, default=0.3
If the output of the different transformers contains sparse matrices, these will be stacked as a sparse matrix if the overall density is lower than this value. Use `sparse_threshold=0` to always return dense. When the transformed output consists of all dense data, the stacked result will be dense, and this keyword will be ignored.
**n\_jobs**int, default=None
Number of jobs to run in parallel. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**transformer\_weights**dict, default=None
Multiplicative weights for features per transformer. The output of the transformer is multiplied by these weights. Keys are transformer names, values the weights.
**verbose**bool, default=False
If True, the time elapsed while fitting each transformer will be printed as it is completed.
**verbose\_feature\_names\_out**bool, default=True
If True, [`get_feature_names_out`](#sklearn.compose.ColumnTransformer.get_feature_names_out "sklearn.compose.ColumnTransformer.get_feature_names_out") will prefix all feature names with the name of the transformer that generated that feature. If False, [`get_feature_names_out`](#sklearn.compose.ColumnTransformer.get_feature_names_out "sklearn.compose.ColumnTransformer.get_feature_names_out") will not prefix any feature names and will error if feature names are not unique.
New in version 1.0.
Attributes:
**transformers\_**list
The collection of fitted transformers as tuples of (name, fitted\_transformer, column). `fitted_transformer` can be an estimator, ‘drop’, or ‘passthrough’. In case there were no columns selected, this will be the unfitted transformer. If there are remaining columns, the final element is a tuple of the form: (‘remainder’, transformer, remaining\_columns) corresponding to the `remainder` parameter. If there are remaining columns, then `len(transformers_)==len(transformers)+1`, otherwise `len(transformers_)==len(transformers)`.
[`named_transformers_`](#sklearn.compose.ColumnTransformer.named_transformers_ "sklearn.compose.ColumnTransformer.named_transformers_")[`Bunch`](sklearn.utils.bunch#sklearn.utils.Bunch "sklearn.utils.Bunch")
Access the fitted transformer by name.
**sparse\_output\_**bool
Boolean flag indicating whether the output of `transform` is a sparse matrix or a dense numpy array, which depends on the output of the individual transformers and the `sparse_threshold` keyword.
**output\_indices\_**dict
A dictionary from each transformer name to a slice, where the slice corresponds to indices in the transformed output. This is useful to inspect which transformer is responsible for which transformed feature(s).
New in version 1.0.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Only defined if the underlying transformers expose such an attribute when fit.
New in version 0.24.
See also
[`make_column_transformer`](sklearn.compose.make_column_transformer#sklearn.compose.make_column_transformer "sklearn.compose.make_column_transformer")
Convenience function for combining the outputs of multiple transformer objects applied to column subsets of the original feature space.
[`make_column_selector`](sklearn.compose.make_column_selector#sklearn.compose.make_column_selector "sklearn.compose.make_column_selector")
Convenience function for selecting columns based on datatype or the columns name with a regex pattern.
#### Notes
The order of the columns in the transformed feature matrix follows the order in which the columns are specified in the `transformers` list. Columns of the original feature matrix that are not specified are dropped from the resulting transformed feature matrix, unless they are kept via the `remainder='passthrough'` option. Those passed-through columns are appended to the right of the output of the transformers.
#### Examples
```
>>> import numpy as np
>>> from sklearn.compose import ColumnTransformer
>>> from sklearn.preprocessing import Normalizer
>>> ct = ColumnTransformer(
... [("norm1", Normalizer(norm='l1'), [0, 1]),
... ("norm2", Normalizer(norm='l1'), slice(2, 4))])
>>> X = np.array([[0., 1., 2., 2.],
... [1., 1., 0., 1.]])
>>> # Normalizer scales each row of X to unit norm. A separate scaling
>>> # is applied for the two first and two last elements of each
>>> # row independently.
>>> ct.fit_transform(X)
array([[0. , 1. , 0.5, 0.5],
[0.5, 0.5, 0. , 1. ]])
```
[`ColumnTransformer`](#sklearn.compose.ColumnTransformer "sklearn.compose.ColumnTransformer") can be configured with a transformer that requires a 1d array by setting the column to a string:
```
>>> from sklearn.feature_extraction import FeatureHasher
>>> from sklearn.preprocessing import MinMaxScaler
>>> import pandas as pd
>>> X = pd.DataFrame({
... "documents": ["First item", "second one here", "Is this the last?"],
... "width": [3, 4, 5],
... })
>>> # "documents" is a string which configures ColumnTransformer to
>>> # pass the documents column as a 1d array to the FeatureHasher
>>> ct = ColumnTransformer(
... [("text_preprocess", FeatureHasher(input_type="string"), "documents"),
... ("num_preprocess", MinMaxScaler(), ["width"])])
>>> X_trans = ct.fit_transform(X)
```
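As described in the Notes above, unselected columns are dropped by default; a sketch (toy frame assumed) of keeping them with `remainder='passthrough'`, which appends them to the right of the transformed output:
```
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10, 20, 30], "c": [0, 1, 0]})
ct = ColumnTransformer([("scale", StandardScaler(), ["a"])], remainder="passthrough")
X_t = ct.fit_transform(X)
print(X_t.shape)   # (3, 3): scaled 'a' first, then the passed-through 'b' and 'c'
```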
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.compose.ColumnTransformer.fit "sklearn.compose.ColumnTransformer.fit")(X[, y]) | Fit all transformers using X. |
| [`fit_transform`](#sklearn.compose.ColumnTransformer.fit_transform "sklearn.compose.ColumnTransformer.fit_transform")(X[, y]) | Fit all transformers, transform the data and concatenate results. |
| [`get_feature_names`](#sklearn.compose.ColumnTransformer.get_feature_names "sklearn.compose.ColumnTransformer.get_feature_names")() | DEPRECATED: get\_feature\_names is deprecated in 1.0 and will be removed in 1.2. |
| [`get_feature_names_out`](#sklearn.compose.ColumnTransformer.get_feature_names_out "sklearn.compose.ColumnTransformer.get_feature_names_out")([input\_features]) | Get output feature names for transformation. |
| [`get_params`](#sklearn.compose.ColumnTransformer.get_params "sklearn.compose.ColumnTransformer.get_params")([deep]) | Get parameters for this estimator. |
| [`set_params`](#sklearn.compose.ColumnTransformer.set_params "sklearn.compose.ColumnTransformer.set_params")(\*\*kwargs) | Set the parameters of this estimator. |
| [`transform`](#sklearn.compose.ColumnTransformer.transform "sklearn.compose.ColumnTransformer.transform")(X) | Transform X separately by each transformer, concatenate results. |
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L638)
Fit all transformers using X.
Parameters:
**X**{array-like, dataframe} of shape (n\_samples, n\_features)
Input data, of which specified subsets are used to fit the transformers.
**y**array-like of shape (n\_samples,…), default=None
Targets for supervised learning.
Returns:
**self**ColumnTransformer
This estimator.
fit\_transform(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L660)
Fit all transformers, transform the data and concatenate results.
Parameters:
**X**{array-like, dataframe} of shape (n\_samples, n\_features)
Input data, of which specified subsets are used to fit the transformers.
**y**array-like of shape (n\_samples,), default=None
Targets for supervised learning.
Returns:
**X\_t**{array-like, sparse matrix} of shape (n\_samples, sum\_n\_components)
Horizontally stacked results of transformers. sum\_n\_components is the sum of n\_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices.
get\_feature\_names()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L411)
DEPRECATED: get\_feature\_names is deprecated in 1.0 and will be removed in 1.2. Please use get\_feature\_names\_out instead.
Get feature names from all transformers.
Returns:
**feature\_names**list of strings
Names of the features produced by transform.
get\_feature\_names\_out(*input\_features=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L470)
Get output feature names for transformation.
Parameters:
**input\_features**array-like of str or None, default=None
Input features.
* If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
* If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
Returns:
**feature\_names\_out**ndarray of str objects
Transformed feature names.
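As a hedged sketch (not from the original docstring), the output names are prefixed by the transformer names given in `transformers`; the toy DataFrame and the transformer names `"cat"` and `"num"` are illustrative assumptions:
```
>>> import pandas as pd
>>> from sklearn.compose import ColumnTransformer
>>> from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
>>> X = pd.DataFrame({"city": ["London", "Paris"], "width": [3, 4]})  # toy data
>>> ct = ColumnTransformer(
...     [("cat", OneHotEncoder(), ["city"]),
...      ("num", MinMaxScaler(), ["width"])]).fit(X)
>>> ct.get_feature_names_out()
array(['cat__city_London', 'cat__city_Paris', 'num__width'], dtype=object)
```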
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L256)
Get parameters for this estimator.
Returns the parameters given in the constructor as well as the estimators contained within the `transformers` of the `ColumnTransformer`.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*named\_transformers\_
Access the fitted transformer by name.
Read-only attribute to access any transformer by given name. Keys are transformer names and values are the fitted transformer objects.
set\_params(*\*\*kwargs*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L276)
Set the parameters of this estimator.
Valid parameter keys can be listed with `get_params()`. Note that you can directly set the parameters of the estimators contained in `transformers` of `ColumnTransformer`.
Parameters:
**\*\*kwargs**dict
Estimator parameters.
Returns:
**self**ColumnTransformer
This estimator.
transform(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/compose/_column_transformer.py#L716)
Transform X separately by each transformer, concatenate results.
Parameters:
**X**{array-like, dataframe} of shape (n\_samples, n\_features)
The data to be transformed by subset.
Returns:
**X\_t**{array-like, sparse matrix} of shape (n\_samples, sum\_n\_components)
Horizontally stacked results of transformers. sum\_n\_components is the sum of n\_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices.
Examples using `sklearn.compose.ColumnTransformer`
--------------------------------------------------
[Release Highlights for scikit-learn 1.1](../../auto_examples/release_highlights/plot_release_highlights_1_1_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-1-0-py)
[Release Highlights for scikit-learn 1.0](../../auto_examples/release_highlights/plot_release_highlights_1_0_0#sphx-glr-auto-examples-release-highlights-plot-release-highlights-1-0-0-py)
[Time-related feature engineering](../../auto_examples/applications/plot_cyclical_feature_engineering#sphx-glr-auto-examples-applications-plot-cyclical-feature-engineering-py)
[Poisson regression and non-normal loss](../../auto_examples/linear_model/plot_poisson_regression_non_normal_loss#sphx-glr-auto-examples-linear-model-plot-poisson-regression-non-normal-loss-py)
[Tweedie regression on insurance claims](../../auto_examples/linear_model/plot_tweedie_regression_insurance_claims#sphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py)
[Permutation Importance vs Random Forest Feature Importance (MDI)](../../auto_examples/inspection/plot_permutation_importance#sphx-glr-auto-examples-inspection-plot-permutation-importance-py)
[Displaying Pipelines](../../auto_examples/miscellaneous/plot_pipeline_display#sphx-glr-auto-examples-miscellaneous-plot-pipeline-display-py)
[Column Transformer with Heterogeneous Data Sources](../../auto_examples/compose/plot_column_transformer#sphx-glr-auto-examples-compose-plot-column-transformer-py)
[Column Transformer with Mixed Types](../../auto_examples/compose/plot_column_transformer_mixed_types#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py)
scikit_learn sklearn.covariance.EllipticEnvelope sklearn.covariance.EllipticEnvelope
===================================
*class*sklearn.covariance.EllipticEnvelope(*\**, *store\_precision=True*, *assume\_centered=False*, *support\_fraction=None*, *contamination=0.1*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_elliptic_envelope.py#L12)
An object for detecting outliers in a Gaussian distributed dataset.
Read more in the [User Guide](../outlier_detection#outlier-detection).
Parameters:
**store\_precision**bool, default=True
Specify if the estimated precision is stored.
**assume\_centered**bool, default=False
If True, the support of robust location and covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful when working with data whose mean is close to, but not exactly, zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
**support\_fraction**float, default=None
The proportion of points to be included in the support of the raw MCD estimate. If None, the minimum value of support\_fraction will be used within the algorithm: `[n_sample + n_features + 1] / 2`. Range is (0, 1).
**contamination**float, default=0.1
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Range is (0, 0.5].
**random\_state**int, RandomState instance or None, default=None
Determines the pseudo random number generator for shuffling the data. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Attributes:
**location\_**ndarray of shape (n\_features,)
Estimated robust location.
**covariance\_**ndarray of shape (n\_features, n\_features)
Estimated robust covariance matrix.
**precision\_**ndarray of shape (n\_features, n\_features)
Estimated pseudo inverse matrix. (stored only if store\_precision is True)
**support\_**ndarray of shape (n\_samples,)
A mask of the observations that have been used to compute the robust estimates of location and shape.
**offset\_**float
Offset used to define the decision function from the raw scores. We have the relation: `decision_function = score_samples - offset_`. The offset depends on the contamination parameter and is defined in such a way that we obtain the expected number of outliers (samples with decision function < 0) in training.
New in version 0.20.
**raw\_location\_**ndarray of shape (n\_features,)
The raw robust estimated location before correction and re-weighting.
**raw\_covariance\_**ndarray of shape (n\_features, n\_features)
The raw robust estimated covariance before correction and re-weighting.
**raw\_support\_**ndarray of shape (n\_samples,)
A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and re-weighting.
**dist\_**ndarray of shape (n\_samples,)
Mahalanobis distances of the training set (on which [`fit`](#sklearn.covariance.EllipticEnvelope.fit "sklearn.covariance.EllipticEnvelope.fit") is called) observations.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`EmpiricalCovariance`](sklearn.covariance.empiricalcovariance#sklearn.covariance.EmpiricalCovariance "sklearn.covariance.EmpiricalCovariance")
Maximum likelihood covariance estimator.
[`GraphicalLasso`](sklearn.covariance.graphicallasso#sklearn.covariance.GraphicalLasso "sklearn.covariance.GraphicalLasso")
Sparse inverse covariance estimation with an l1-penalized estimator.
[`LedoitWolf`](sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf "sklearn.covariance.LedoitWolf")
LedoitWolf Estimator.
[`MinCovDet`](sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet "sklearn.covariance.MinCovDet")
Minimum Covariance Determinant (robust estimator of covariance).
[`OAS`](sklearn.covariance.oas#sklearn.covariance.OAS "sklearn.covariance.OAS")
Oracle Approximating Shrinkage Estimator.
[`ShrunkCovariance`](sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance "sklearn.covariance.ShrunkCovariance")
Covariance estimator with shrinkage.
#### Notes
Outlier detection from covariance estimation may break or not perform well in high-dimensional settings. In particular, always take care to work with `n_samples > n_features ** 2`.
#### References
[1] Rousseeuw, P.J., Van Driessen, K. “A fast algorithm for the minimum covariance determinant estimator” Technometrics 41(3), 212 (1999)
#### Examples
```
>>> import numpy as np
>>> from sklearn.covariance import EllipticEnvelope
>>> true_cov = np.array([[.8, .3],
... [.3, .4]])
>>> X = np.random.RandomState(0).multivariate_normal(mean=[0, 0],
... cov=true_cov,
... size=500)
>>> cov = EllipticEnvelope(random_state=0).fit(X)
>>> # predict returns 1 for an inlier and -1 for an outlier
>>> cov.predict([[0, 0],
... [3, 3]])
array([ 1, -1])
>>> cov.covariance_
array([[0.7411..., 0.2535...],
[0.2535..., 0.3053...]])
>>> cov.location_
array([0.0813... , 0.0427...])
```
#### Methods
| | |
| --- | --- |
| [`correct_covariance`](#sklearn.covariance.EllipticEnvelope.correct_covariance "sklearn.covariance.EllipticEnvelope.correct_covariance")(data) | Apply a correction to raw Minimum Covariance Determinant estimates. |
| [`decision_function`](#sklearn.covariance.EllipticEnvelope.decision_function "sklearn.covariance.EllipticEnvelope.decision_function")(X) | Compute the decision function of the given observations. |
| [`error_norm`](#sklearn.covariance.EllipticEnvelope.error_norm "sklearn.covariance.EllipticEnvelope.error_norm")(comp\_cov[, norm, scaling, squared]) | Compute the Mean Squared Error between two covariance estimators. |
| [`fit`](#sklearn.covariance.EllipticEnvelope.fit "sklearn.covariance.EllipticEnvelope.fit")(X[, y]) | Fit the EllipticEnvelope model. |
| [`fit_predict`](#sklearn.covariance.EllipticEnvelope.fit_predict "sklearn.covariance.EllipticEnvelope.fit_predict")(X[, y]) | Perform fit on X and return labels for X. |
| [`get_params`](#sklearn.covariance.EllipticEnvelope.get_params "sklearn.covariance.EllipticEnvelope.get_params")([deep]) | Get parameters for this estimator. |
| [`get_precision`](#sklearn.covariance.EllipticEnvelope.get_precision "sklearn.covariance.EllipticEnvelope.get_precision")() | Getter for the precision matrix. |
| [`mahalanobis`](#sklearn.covariance.EllipticEnvelope.mahalanobis "sklearn.covariance.EllipticEnvelope.mahalanobis")(X) | Compute the squared Mahalanobis distances of given observations. |
| [`predict`](#sklearn.covariance.EllipticEnvelope.predict "sklearn.covariance.EllipticEnvelope.predict")(X) | Predict labels (1 inlier, -1 outlier) of X according to fitted model. |
| [`reweight_covariance`](#sklearn.covariance.EllipticEnvelope.reweight_covariance "sklearn.covariance.EllipticEnvelope.reweight_covariance")(data) | Re-weight raw Minimum Covariance Determinant estimates. |
| [`score`](#sklearn.covariance.EllipticEnvelope.score "sklearn.covariance.EllipticEnvelope.score")(X, y[, sample\_weight]) | Return the mean accuracy on the given test data and labels. |
| [`score_samples`](#sklearn.covariance.EllipticEnvelope.score_samples "sklearn.covariance.EllipticEnvelope.score_samples")(X) | Compute the negative Mahalanobis distances. |
| [`set_params`](#sklearn.covariance.EllipticEnvelope.set_params "sklearn.covariance.EllipticEnvelope.set_params")(\*\*params) | Set the parameters of this estimator. |
correct\_covariance(*data*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_robust_covariance.py#L769)
Apply a correction to raw Minimum Covariance Determinant estimates.
Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [[RVD]](#rbb2ba44703ed-rvd).
Parameters:
**data**array-like of shape (n\_samples, n\_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates.
Returns:
**covariance\_corrected**ndarray of shape (n\_features, n\_features)
Corrected robust covariance estimate.
#### References
[[RVD](#id2)] A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
decision\_function(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_elliptic_envelope.py#L184)
Compute the decision function of the given observations.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**decision**ndarray of shape (n\_samples,)
Decision function of the samples. It is equal to the shifted Mahalanobis distances. The threshold for being an outlier is 0, which ensures compatibility with other outlier detection algorithms.
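The relation between `decision_function`, `score_samples` and `offset_` can be illustrated with a small sketch (not part of the original docstring; the toy data mirror the class example above):
```
>>> import numpy as np
>>> from sklearn.covariance import EllipticEnvelope
>>> X = np.random.RandomState(0).multivariate_normal(
...     mean=[0, 0], cov=[[.8, .3], [.3, .4]], size=500)
>>> cov = EllipticEnvelope(random_state=0).fit(X)
>>> scores = cov.decision_function(X[:5])
>>> # shifted negative Mahalanobis distances: score_samples minus offset_
>>> np.allclose(scores, cov.score_samples(X[:5]) - cov.offset_)
True
>>> # a positive decision function corresponds to a predicted inlier (+1)
>>> np.array_equal(cov.predict(X[:5]), np.where(scores > 0, 1, -1))
True
```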
error\_norm(*comp\_cov*, *norm='frobenius'*, *scaling=True*, *squared=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L267)
Compute the Mean Squared Error between two covariance estimators.
Parameters:
**comp\_cov**array-like of shape (n\_features, n\_features)
The covariance to compare with.
**norm**{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types:
* ‘frobenius’ (default): sqrt(tr(A^t.A))
* ‘spectral’: sqrt(max(eigenvalues(A^t.A)))
where A is the error `(comp_cov - self.covariance_)`.
**scaling**bool, default=True
If True (default), the squared error norm is divided by n\_features. If False, the squared error norm is not rescaled.
**squared**bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.
Returns:
**result**float
The Mean Squared Error (in the sense of the Frobenius norm) between `self` and `comp_cov` covariance estimators.
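A minimal sketch, not in the original docstring, comparing the fitted robust covariance with the covariance that generated the data; the `0.1` bound is purely illustrative:
```
>>> import numpy as np
>>> from sklearn.covariance import EllipticEnvelope
>>> true_cov = np.array([[.8, .3],
...                      [.3, .4]])  # illustrative generating covariance
>>> X = np.random.RandomState(0).multivariate_normal(
...     mean=[0, 0], cov=true_cov, size=500)
>>> cov = EllipticEnvelope(random_state=0).fit(X)
>>> # mean squared error, in the Frobenius sense, between the two estimates
>>> cov.error_norm(true_cov) < 0.1
True
```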
fit(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_elliptic_envelope.py#L158)
Fit the EllipticEnvelope model.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**self**object
Returns the instance itself.
fit\_predict(*X*, *y=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L956)
Perform fit on X and return labels for X.
Returns -1 for outliers and 1 for inliers.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
The input samples.
**y**Ignored
Not used, present for API consistency by convention.
Returns:
**y**ndarray of shape (n\_samples,)
1 for inliers, -1 for outliers.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
get\_precision()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L195)
Getter for the precision matrix.
Returns:
**precision\_**array-like of shape (n\_features, n\_features)
The precision matrix associated to the current covariance object.
mahalanobis(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_empirical_covariance.py#L318)
Compute the squared Mahalanobis distances of given observations.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit.
Returns:
**dist**ndarray of shape (n\_samples,)
Squared Mahalanobis distances of the observations.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_elliptic_envelope.py#L220)
Predict labels (1 inlier, -1 outlier) of X according to fitted model.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**is\_inlier**ndarray of shape (n\_samples,)
Returns -1 for anomalies/outliers and +1 for inliers.
reweight\_covariance(*data*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_robust_covariance.py#L809)
Re-weight raw Minimum Covariance Determinant estimates.
Re-weight observations using Rousseeuw’s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [[RVDriessen]](#rd2c89e63f1c9-rvdriessen).
Parameters:
**data**array-like of shape (n\_samples, n\_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates.
Returns:
**location\_reweighted**ndarray of shape (n\_features,)
Re-weighted robust location estimate.
**covariance\_reweighted**ndarray of shape (n\_features, n\_features)
Re-weighted robust covariance estimate.
**support\_reweighted**ndarray of shape (n\_samples,), dtype=bool
A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates.
#### References
[[RVDriessen](#id4)] A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_elliptic_envelope.py#L240)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True labels for X.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
Mean accuracy of self.predict(X) w.r.t. y.
score\_samples(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/covariance/_elliptic_envelope.py#L204)
Compute the negative Mahalanobis distances.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
The data matrix.
Returns:
**negative\_mahal\_distances**array-like of shape (n\_samples,)
Opposite of the Mahalanobis distances.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
Examples using `sklearn.covariance.EllipticEnvelope`
----------------------------------------------------
[Outlier detection on a real data set](../../auto_examples/applications/plot_outlier_detection_wine#sphx-glr-auto-examples-applications-plot-outlier-detection-wine-py)
[Comparing anomaly detection algorithms for outlier detection on toy datasets](../../auto_examples/miscellaneous/plot_anomaly_comparison#sphx-glr-auto-examples-miscellaneous-plot-anomaly-comparison-py)
scikit_learn sklearn.gaussian_process.kernels.Matern sklearn.gaussian\_process.kernels.Matern
========================================
*class*sklearn.gaussian\_process.kernels.Matern(*length\_scale=1.0*, *length\_scale\_bounds=(1e-05, 100000.0)*, *nu=1.5*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1576)
Matern kernel.
The class of Matern kernels is a generalization of the [`RBF`](sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF "sklearn.gaussian_process.kernels.RBF"). It has an additional parameter \(\nu\) which controls the smoothness of the resulting function. The smaller \(\nu\), the less smooth the approximated function is. As \(\nu\rightarrow\infty\), the kernel becomes equivalent to the [`RBF`](sklearn.gaussian_process.kernels.rbf#sklearn.gaussian_process.kernels.RBF "sklearn.gaussian_process.kernels.RBF") kernel. When \(\nu = 1/2\), the Matérn kernel becomes identical to the absolute exponential kernel. Important intermediate values are \(\nu=1.5\) (once differentiable functions) and \(\nu=2.5\) (twice differentiable functions).
The kernel is given by:
\[k(x\_i, x\_j) = \frac{1}{\Gamma(\nu)2^{\nu-1}}\Bigg( \frac{\sqrt{2\nu}}{l} d(x\_i , x\_j ) \Bigg)^\nu K\_\nu\Bigg( \frac{\sqrt{2\nu}}{l} d(x\_i , x\_j )\Bigg)\] where \(d(\cdot,\cdot)\) is the Euclidean distance, \(K\_{\nu}(\cdot)\) is a modified Bessel function and \(\Gamma(\cdot)\) is the gamma function. See [[1]](#rc15b4675c755-1), Chapter 4, Section 4.2, for details regarding the different variants of the Matern kernel.
Read more in the [User Guide](../gaussian_process#gp-kernels).
New in version 0.18.
Parameters:
**length\_scale**float or ndarray of shape (n\_features,), default=1.0
The length scale of the kernel. If a float, an isotropic kernel is used. If an array, an anisotropic kernel is used where each dimension of l defines the length-scale of the respective feature dimension.
**length\_scale\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length\_scale’. If set to “fixed”, ‘length\_scale’ cannot be changed during hyperparameter tuning.
**nu**float, default=1.5
The parameter nu controlling the smoothness of the learned function. The smaller nu, the less smooth the approximated function is. For nu=inf, the kernel becomes equivalent to the RBF kernel and for nu=0.5 to the absolute exponential kernel. Important intermediate values are nu=1.5 (once differentiable functions) and nu=2.5 (twice differentiable functions). Note that values of nu not in [0.5, 1.5, 2.5, inf] incur a considerably higher computational cost (appr. 10 times higher) since they require to evaluate the modified Bessel function. Furthermore, in contrast to l, nu is kept fixed to its initial value and not optimized.
Attributes:
**anisotropic**
[`bounds`](#sklearn.gaussian_process.kernels.Matern.bounds "sklearn.gaussian_process.kernels.Matern.bounds")
Returns the log-transformed bounds on the theta.
**hyperparameter\_length\_scale**
[`hyperparameters`](#sklearn.gaussian_process.kernels.Matern.hyperparameters "sklearn.gaussian_process.kernels.Matern.hyperparameters")
Returns a list of all hyperparameter specifications.
[`n_dims`](#sklearn.gaussian_process.kernels.Matern.n_dims "sklearn.gaussian_process.kernels.Matern.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.Matern.requires_vector_input "sklearn.gaussian_process.kernels.Matern.requires_vector_input")
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
[`theta`](#sklearn.gaussian_process.kernels.Matern.theta "sklearn.gaussian_process.kernels.Matern.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### References
[[1](#id1)] [Carl Edward Rasmussen, Christopher K. I. Williams (2006). “Gaussian Processes for Machine Learning”. The MIT Press.](http://www.gaussianprocess.org/gpml/)
#### Examples
```
>>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import Matern
>>> X, y = load_iris(return_X_y=True)
>>> kernel = 1.0 * Matern(length_scale=1.0, nu=1.5)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9866...
>>> gpc.predict_proba(X[:2,:])
array([[0.8513..., 0.0368..., 0.1117...],
[0.8086..., 0.0693..., 0.1220...]])
```
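The limiting cases mentioned above can be checked numerically. This is a hedged sketch, not part of the original docstring; the random toy inputs are illustrative. For `nu=np.inf` the kernel matches `RBF`, and for `nu=0.5` it reduces to the absolute exponential kernel `exp(-d / length_scale)`:
```
>>> import numpy as np
>>> from scipy.spatial.distance import cdist
>>> from sklearn.gaussian_process.kernels import Matern, RBF
>>> X = np.random.RandomState(0).uniform(size=(5, 2))  # illustrative inputs
>>> # nu=inf: the Matern kernel coincides with the RBF kernel
>>> np.allclose(Matern(length_scale=1.0, nu=np.inf)(X), RBF(length_scale=1.0)(X))
True
>>> # nu=0.5: the Matern kernel equals the absolute exponential kernel
>>> np.allclose(Matern(length_scale=1.0, nu=0.5)(X), np.exp(-cdist(X, X)))
True
```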
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.Matern.__call__ "sklearn.gaussian_process.kernels.Matern.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.Matern.clone_with_theta "sklearn.gaussian_process.kernels.Matern.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.Matern.diag "sklearn.gaussian_process.kernels.Matern.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.Matern.get_params "sklearn.gaussian_process.kernels.Matern.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.Matern.is_stationary "sklearn.gaussian_process.kernels.Matern.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.Matern.set_params "sklearn.gaussian_process.kernels.Matern.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1660)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when `eval_gradient` is True.
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L448)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X)
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L158)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameters
Returns a list of all hyperparameter specifications.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L474)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
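A small sketch of the log-transformed representation (illustrative only, not from the docstring); the `length_scale=2.0` value is an arbitrary assumption, and `nu` is kept fixed so it does not appear in `theta`:
```
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import Matern
>>> kernel = Matern(length_scale=2.0, nu=1.5)  # illustrative hyperparameters
>>> # theta stores log(length_scale); nu is not a tunable hyperparameter
>>> np.allclose(kernel.theta, np.log([2.0]))
True
```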
Examples using `sklearn.gaussian_process.kernels.Matern`
--------------------------------------------------------
[Illustration of prior and posterior Gaussian process for different kernels](../../auto_examples/gaussian_process/plot_gpr_prior_posterior#sphx-glr-auto-examples-gaussian-process-plot-gpr-prior-posterior-py)
scikit_learn sklearn.exceptions.ConvergenceWarning sklearn.exceptions.ConvergenceWarning
=====================================
*class*sklearn.exceptions.ConvergenceWarning[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/exceptions.py#L41)
Custom warning to capture convergence problems
Changed in version 0.18: Moved from sklearn.utils.
Attributes:
**args**
#### Methods
| | |
| --- | --- |
| [`with_traceback`](#sklearn.exceptions.ConvergenceWarning.with_traceback "sklearn.exceptions.ConvergenceWarning.with_traceback") | Exception.with\_traceback(tb) -- set self.\_\_traceback\_\_ to tb and return self. |
with\_traceback()
Exception.with\_traceback(tb) – set self.\_\_traceback\_\_ to tb and return self.
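A hedged sketch of how such a warning can be caught; the estimator, the synthetic dataset, and `max_iter=1` are illustrative assumptions, and whether the warning actually fires depends on the solver and the data:
```
>>> import warnings
>>> from sklearn.datasets import make_classification
>>> from sklearn.exceptions import ConvergenceWarning
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=100, random_state=0)  # toy data
>>> with warnings.catch_warnings(record=True) as caught:
...     warnings.simplefilter("always", ConvergenceWarning)
...     _ = LogisticRegression(max_iter=1).fit(X, y)
>>> # max_iter=1 is almost certainly too few iterations for lbfgs to converge
>>> any(issubclass(w.category, ConvergenceWarning) for w in caught)
True
```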
Examples using `sklearn.exceptions.ConvergenceWarning`
------------------------------------------------------
[Early stopping of Stochastic Gradient Descent](../../auto_examples/linear_model/plot_sgd_early_stopping#sphx-glr-auto-examples-linear-model-plot-sgd-early-stopping-py)
[Multiclass sparse logistic regression on 20newgroups](../../auto_examples/linear_model/plot_sparse_logistic_regression_20newsgroups#sphx-glr-auto-examples-linear-model-plot-sparse-logistic-regression-20newsgroups-py)
[Compare Stochastic learning strategies for MLPClassifier](../../auto_examples/neural_networks/plot_mlp_training_curves#sphx-glr-auto-examples-neural-networks-plot-mlp-training-curves-py)
[Visualization of MLP weights on MNIST](../../auto_examples/neural_networks/plot_mnist_filters#sphx-glr-auto-examples-neural-networks-plot-mnist-filters-py)
[Feature discretization](../../auto_examples/preprocessing/plot_discretization_classification#sphx-glr-auto-examples-preprocessing-plot-discretization-classification-py)
scikit_learn sklearn.linear_model.Lars sklearn.linear\_model.Lars
==========================
*class*sklearn.linear\_model.Lars(*\**, *fit\_intercept=True*, *verbose=False*, *normalize='deprecated'*, *precompute='auto'*, *n\_nonzero\_coefs=500*, *eps=2.220446049250313e-16*, *copy\_X=True*, *fit\_path=True*, *jitter=None*, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_least_angle.py#L847)
Least Angle Regression model a.k.a. LAR.
Read more in the [User Guide](../linear_model#least-angle-regression).
Parameters:
**fit\_intercept**bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
**verbose**bool or int, default=False
Sets the verbosity amount.
**normalize**bool, default=True
This parameter is ignored when `fit_intercept` is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use [`StandardScaler`](sklearn.preprocessing.standardscaler#sklearn.preprocessing.StandardScaler "sklearn.preprocessing.StandardScaler") before calling `fit` on an estimator with `normalize=False`.
Deprecated since version 1.0: `normalize` was deprecated in version 1.0. It will default to False in 1.2 and be removed in 1.4.
**precompute**bool, ‘auto’ or array-like , default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**n\_nonzero\_coefs**int, default=500
Target number of non-zero coefficients. Use `np.inf` for no limit.
**eps**float, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the `tol` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**fit\_path**bool, default=True
If True the full path is stored in the `coef_path_` attribute. If you compute the solution for a large problem or many targets, setting `fit_path` to `False` will lead to a speedup, especially with a small alpha.
**jitter**float, default=None
Upper bound on a uniform noise parameter to be added to the `y` values, to satisfy the model’s assumption of one-at-a-time computations. Might help with stability.
New in version 0.23.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for jittering. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state). Ignored if `jitter` is None.
New in version 0.23.
Attributes:
**alphas\_**array-like of shape (n\_alphas + 1,) or list of such arrays
Maximum of covariances (in absolute value) at each iteration. `n_alphas` is either `max_iter`, `n_features` or the number of nodes in the path with `alpha >= alpha_min`, whichever is smaller. If this is a list of array-like, the length of the outer list is `n_targets`.
**active\_**list of shape (n\_alphas,) or list of such lists
Indices of active variables at the end of the path. If this is a list of list, the length of the outer list is `n_targets`.
**coef\_path\_**array-like of shape (n\_features, n\_alphas + 1) or list of such arrays
The varying values of the coefficients along the path. It is not present if the `fit_path` parameter is `False`. If this is a list of array-like, the length of the outer list is `n_targets`.
**coef\_**array-like of shape (n\_features,) or (n\_targets, n\_features)
Parameter vector (w in the formulation formula).
**intercept\_**float or array-like of shape (n\_targets,)
Independent term in decision function.
**n\_iter\_**array-like or int
The number of iterations taken by lars\_path to find the grid of alphas for each target.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`LarsCV`](sklearn.linear_model.larscv#sklearn.linear_model.LarsCV "sklearn.linear_model.LarsCV")
Cross-validated Least Angle Regression model.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Sparse coding.
#### Examples
```
>>> from sklearn import linear_model
>>> reg = linear_model.Lars(n_nonzero_coefs=1, normalize=False)
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
Lars(n_nonzero_coefs=1, normalize=False)
>>> print(reg.coef_)
[ 0. -1.11...]
```
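As a hedged follow-up to the example above (a sketch, not part of the original docstring), predictions only involve the single selected coefficient plus the intercept, since the other coefficient stays at zero:
```
>>> import numpy as np
>>> from sklearn import linear_model
>>> reg = linear_model.Lars(n_nonzero_coefs=1, normalize=False)
>>> reg = reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
>>> # only the second feature carries a non-zero coefficient, so predictions
>>> # reduce to intercept_ + coef_[1] * x_2
>>> np.allclose(reg.predict([[0, 5], [0, -5]]),
...             reg.intercept_ + reg.coef_[1] * np.array([5, -5]))
True
```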
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.linear_model.Lars.fit "sklearn.linear_model.Lars.fit")(X, y[, Xy]) | Fit the model using X, y as training data. |
| [`get_params`](#sklearn.linear_model.Lars.get_params "sklearn.linear_model.Lars.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.linear_model.Lars.predict "sklearn.linear_model.Lars.predict")(X) | Predict using the linear model. |
| [`score`](#sklearn.linear_model.Lars.score "sklearn.linear_model.Lars.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.linear_model.Lars.set_params "sklearn.linear_model.Lars.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*, *Xy=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_least_angle.py#L1088)
Fit the model using X, y as training data.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Training data.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**Xy**array-like of shape (n\_samples,) or (n\_samples, n\_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
Returns:
**self**object
Returns an instance of self.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_base.py#L372)
Predict using the linear model.
Parameters:
**X**array-like or sparse matrix, shape (n\_samples, n\_features)
Samples.
Returns:
**C**array, shape (n\_samples,)
Returns predicted values.
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred)** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
| programming_docs |
scikit_learn sklearn.decomposition.dict_learning sklearn.decomposition.dict\_learning
====================================
sklearn.decomposition.dict\_learning(*X*, *n\_components*, *\**, *alpha*, *max\_iter=100*, *tol=1e-08*, *method='lars'*, *n\_jobs=None*, *dict\_init=None*, *code\_init=None*, *callback=None*, *verbose=False*, *random\_state=None*, *return\_n\_iter=False*, *positive\_dict=False*, *positive\_code=False*, *method\_max\_iter=1000*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/decomposition/_dict_learning.py#L495)
Solves a dictionary learning matrix factorization problem.
Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:
```
(U^*, V^*) = argmin 0.5 || X - U V ||_Fro^2 + alpha * || U ||_1,1
(U,V)
with || V_k ||_2 = 1 for all 0 <= k < n_components
```
where V is the dictionary and U is the sparse code. ||.||\_Fro stands for the Frobenius norm and ||.||\_1,1 stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix.
Read more in the [User Guide](../decomposition#dictionarylearning).
Parameters:
**X**ndarray of shape (n\_samples, n\_features)
Data matrix.
**n\_components**int
Number of dictionary atoms to extract.
**alpha**int
Sparsity controlling parameter.
**max\_iter**int, default=100
Maximum number of iterations to perform.
**tol**float, default=1e-8
Tolerance for the stopping condition.
**method**{‘lars’, ‘cd’}, default=’lars’
The method used:
* `'lars'`: uses the least angle regression method to solve the lasso problem (`linear_model.lars_path`);
* `'cd'`: uses the coordinate descent method to compute the Lasso solution (`linear_model.Lasso`). Lars will be faster if the estimated components are sparse.
**n\_jobs**int, default=None
Number of parallel jobs to run. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
**dict\_init**ndarray of shape (n\_components, n\_features), default=None
Initial value for the dictionary for warm restart scenarios. Only used if `code_init` and `dict_init` are not None.
**code\_init**ndarray of shape (n\_samples, n\_components), default=None
Initial value for the sparse code for warm restart scenarios. Only used if `code_init` and `dict_init` are not None.
**callback**callable, default=None
Callable that gets invoked every five iterations.
**verbose**bool, default=False
To control the verbosity of the procedure.
**random\_state**int, RandomState instance or None, default=None
Used for randomly initializing the dictionary. Pass an int for reproducible results across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
**return\_n\_iter**bool, default=False
Whether or not to return the number of iterations.
**positive\_dict**bool, default=False
Whether to enforce positivity when finding the dictionary.
New in version 0.20.
**positive\_code**bool, default=False
Whether to enforce positivity when finding the code.
New in version 0.20.
**method\_max\_iter**int, default=1000
Maximum number of iterations to perform.
New in version 0.22.
Returns:
**code**ndarray of shape (n\_samples, n\_components)
The sparse code factor in the matrix factorization.
**dictionary**ndarray of shape (n\_components, n\_features)
The dictionary factor in the matrix factorization.
**errors**array
Vector of errors at each iteration.
**n\_iter**int
Number of iterations run. Returned only if `return_n_iter` is set to True.
See also
[`dict_learning_online`](sklearn.decomposition.dict_learning_online#sklearn.decomposition.dict_learning_online "sklearn.decomposition.dict_learning_online")
[`DictionaryLearning`](sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning "sklearn.decomposition.DictionaryLearning")
[`MiniBatchDictionaryLearning`](sklearn.decomposition.minibatchdictionarylearning#sklearn.decomposition.MiniBatchDictionaryLearning "sklearn.decomposition.MiniBatchDictionaryLearning")
[`SparsePCA`](sklearn.decomposition.sparsepca#sklearn.decomposition.SparsePCA "sklearn.decomposition.SparsePCA")
[`MiniBatchSparsePCA`](sklearn.decomposition.minibatchsparsepca#sklearn.decomposition.MiniBatchSparsePCA "sklearn.decomposition.MiniBatchSparsePCA")
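A minimal usage sketch on random data (not part of the original docstring; `n_components=5` and `alpha=1.0` are arbitrary illustrative values), showing the shapes promised in the Returns section:
```
>>> import numpy as np
>>> from sklearn.decomposition import dict_learning
>>> X = np.random.RandomState(0).randn(20, 8)  # illustrative random data
>>> code, dictionary, errors = dict_learning(X, n_components=5, alpha=1.0,
...                                          random_state=0)
>>> # code is (n_samples, n_components), dictionary is (n_components, n_features)
>>> code.shape, dictionary.shape
((20, 5), (5, 8))
```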
scikit_learn sklearn.gaussian_process.kernels.ExpSineSquared sklearn.gaussian\_process.kernels.ExpSineSquared
================================================
*class*sklearn.gaussian\_process.kernels.ExpSineSquared(*length\_scale=1.0*, *periodicity=1.0*, *length\_scale\_bounds=(1e-05, 100000.0)*, *periodicity\_bounds=(1e-05, 100000.0)*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L1929)
Exp-Sine-Squared kernel (aka periodic kernel).
The ExpSineSquared kernel allows one to model functions which repeat themselves exactly. It is parameterized by a length scale parameter \(l>0\) and a periodicity parameter \(p>0\). Only the isotropic variant where \(l\) is a scalar is supported at the moment. The kernel is given by:
\[k(x\_i, x\_j) = \text{exp}\left(- \frac{ 2\sin^2(\pi d(x\_i, x\_j)/p) }{ l^2} \right)\] where \(l\) is the length scale of the kernel, \(p\) the periodicity of the kernel and \(d(\cdot,\cdot)\) is the Euclidean distance.
Read more in the [User Guide](../gaussian_process#gp-kernels).
New in version 0.18.
Parameters:
**length\_scale**float > 0, default=1.0
The length scale of the kernel.
**periodicity**float > 0, default=1.0
The periodicity of the kernel.
**length\_scale\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length\_scale’. If set to “fixed”, ‘length\_scale’ cannot be changed during hyperparameter tuning.
**periodicity\_bounds**pair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘periodicity’. If set to “fixed”, ‘periodicity’ cannot be changed during hyperparameter tuning.
Attributes:
[`bounds`](#sklearn.gaussian_process.kernels.ExpSineSquared.bounds "sklearn.gaussian_process.kernels.ExpSineSquared.bounds")
Returns the log-transformed bounds on the theta.
[`hyperparameter_length_scale`](#sklearn.gaussian_process.kernels.ExpSineSquared.hyperparameter_length_scale "sklearn.gaussian_process.kernels.ExpSineSquared.hyperparameter_length_scale")
Returns the length scale
**hyperparameter\_periodicity**
[`hyperparameters`](#sklearn.gaussian_process.kernels.ExpSineSquared.hyperparameters "sklearn.gaussian_process.kernels.ExpSineSquared.hyperparameters")
Returns a list of all hyperparameter specifications.
[`n_dims`](#sklearn.gaussian_process.kernels.ExpSineSquared.n_dims "sklearn.gaussian_process.kernels.ExpSineSquared.n_dims")
Returns the number of non-fixed hyperparameters of the kernel.
[`requires_vector_input`](#sklearn.gaussian_process.kernels.ExpSineSquared.requires_vector_input "sklearn.gaussian_process.kernels.ExpSineSquared.requires_vector_input")
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
[`theta`](#sklearn.gaussian_process.kernels.ExpSineSquared.theta "sklearn.gaussian_process.kernels.ExpSineSquared.theta")
Returns the (flattened, log-transformed) non-fixed hyperparameters.
#### Examples
```
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import ExpSineSquared
>>> X, y = make_friedman2(n_samples=50, noise=0, random_state=0)
>>> kernel = ExpSineSquared(length_scale=1, periodicity=1)
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=5,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.0144...
>>> gpr.predict(X[:2,:], return_std=True)
(array([425.6..., 457.5...]), array([0.3894..., 0.3467...]))
```
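A hedged sketch (not from the original docstring) of the exact-repetition property described above: inputs separated by a full period have kernel value 1. The `periodicity=3.0` and the toy inputs are illustrative choices:
```
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import ExpSineSquared
>>> kernel = ExpSineSquared(length_scale=1.0, periodicity=3.0)
>>> X = np.array([[0.0], [3.0], [7.5]])  # illustrative 1-d inputs
>>> K = kernel(X)
>>> # 0.0 and 3.0 are exactly one period apart, so they are perfectly correlated
>>> np.allclose(K[0, 1], 1.0)
True
>>> np.allclose(np.diag(K), 1.0)
True
```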
#### Methods
| | |
| --- | --- |
| [`__call__`](#sklearn.gaussian_process.kernels.ExpSineSquared.__call__ "sklearn.gaussian_process.kernels.ExpSineSquared.__call__")(X[, Y, eval\_gradient]) | Return the kernel k(X, Y) and optionally its gradient. |
| [`clone_with_theta`](#sklearn.gaussian_process.kernels.ExpSineSquared.clone_with_theta "sklearn.gaussian_process.kernels.ExpSineSquared.clone_with_theta")(theta) | Returns a clone of self with given hyperparameters theta. |
| [`diag`](#sklearn.gaussian_process.kernels.ExpSineSquared.diag "sklearn.gaussian_process.kernels.ExpSineSquared.diag")(X) | Returns the diagonal of the kernel k(X, X). |
| [`get_params`](#sklearn.gaussian_process.kernels.ExpSineSquared.get_params "sklearn.gaussian_process.kernels.ExpSineSquared.get_params")([deep]) | Get parameters of this kernel. |
| [`is_stationary`](#sklearn.gaussian_process.kernels.ExpSineSquared.is_stationary "sklearn.gaussian_process.kernels.ExpSineSquared.is_stationary")() | Returns whether the kernel is stationary. |
| [`set_params`](#sklearn.gaussian_process.kernels.ExpSineSquared.set_params "sklearn.gaussian_process.kernels.ExpSineSquared.set_params")(\*\*params) | Set the parameters of this kernel. |
\_\_call\_\_(*X*, *Y=None*, *eval\_gradient=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L2005)
Return the kernel k(X, Y) and optionally its gradient.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
**eval\_gradient**bool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Returns:
**K**ndarray of shape (n\_samples\_X, n\_samples\_Y)
Kernel k(X, Y)
**K\_gradient**ndarray of shape (n\_samples\_X, n\_samples\_X, n\_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when `eval_gradient` is True.
*property*bounds
Returns the log-transformed bounds on the theta.
Returns:
**bounds**ndarray of shape (n\_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone\_with\_theta(*theta*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L238)
Returns a clone of self with given hyperparameters theta.
Parameters:
**theta**ndarray of shape (n\_dims,)
The hyperparameters
diag(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L448)
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
Left argument of the returned kernel k(X, Y)
Returns:
**K\_diag**ndarray of shape (n\_samples\_X,)
Diagonal of kernel k(X, X)
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L158)
Get parameters of this kernel.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
*property*hyperparameter\_length\_scale
Returns the length scale
*property*hyperparameters
Returns a list of all hyperparameter specifications.
is\_stationary()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L474)
Returns whether the kernel is stationary.
*property*n\_dims
Returns the number of non-fixed hyperparameters of the kernel.
*property*requires\_vector\_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/gaussian_process/kernels.py#L198)
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Returns:
self
*property*theta
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.
Returns:
**theta**ndarray of shape (n\_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using `sklearn.gaussian_process.kernels.ExpSineSquared`
----------------------------------------------------------------
[Comparison of kernel ridge and Gaussian process regression](../../auto_examples/gaussian_process/plot_compare_gpr_krr#sphx-glr-auto-examples-gaussian-process-plot-compare-gpr-krr-py)
[Gaussian process regression (GPR) on Mauna Loa CO2 data](../../auto_examples/gaussian_process/plot_gpr_co2#sphx-glr-auto-examples-gaussian-process-plot-gpr-co2-py)
[Illustration of prior and posterior Gaussian process for different kernels](../../auto_examples/gaussian_process/plot_gpr_prior_posterior#sphx-glr-auto-examples-gaussian-process-plot-gpr-prior-posterior-py)
scikit_learn sklearn.neighbors.RadiusNeighborsRegressor sklearn.neighbors.RadiusNeighborsRegressor
==========================================
*class*sklearn.neighbors.RadiusNeighborsRegressor(*radius=1.0*, *\**, *weights='uniform'*, *algorithm='auto'*, *leaf\_size=30*, *p=2*, *metric='minkowski'*, *metric\_params=None*, *n\_jobs=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_regression.py#L256)
Regression based on neighbors within a fixed radius.
The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set.
Read more in the [User Guide](../neighbors#regression).
New in version 0.9.
Parameters:
**radius**float, default=1.0
Range of parameter space to use by default for [`radius_neighbors`](#sklearn.neighbors.RadiusNeighborsRegressor.radius_neighbors "sklearn.neighbors.RadiusNeighborsRegressor.radius_neighbors") queries.
**weights**{‘uniform’, ‘distance’} or callable, default=’uniform’
Weight function used in prediction. Possible values:
* ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.
* ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.
* [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
Uniform weights are used by default.
**algorithm**{‘auto’, ‘ball\_tree’, ‘kd\_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors:
* ‘ball\_tree’ will use [`BallTree`](sklearn.neighbors.balltree#sklearn.neighbors.BallTree "sklearn.neighbors.BallTree")
* ‘kd\_tree’ will use [`KDTree`](sklearn.neighbors.kdtree#sklearn.neighbors.KDTree "sklearn.neighbors.KDTree")
* ‘brute’ will use a brute-force search.
* ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to [`fit`](#sklearn.neighbors.RadiusNeighborsRegressor.fit "sklearn.neighbors.RadiusNeighborsRegressor.fit") method.
Note: fitting on sparse input will override the setting of this parameter, using brute force.
**leaf\_size**int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
**p**int, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan\_distance (l1), and euclidean\_distance (l2) for p = 2. For arbitrary p, minkowski\_distance (l\_p) is used.
**metric**str or callable, default=’minkowski’
Metric to use for distance computation. Default is “minkowski”, which results in the standard Euclidean distance when p = 2. See the documentation of [scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) and the metrics listed in [`distance_metrics`](sklearn.metrics.pairwise.distance_metrics#sklearn.metrics.pairwise.distance_metrics "sklearn.metrics.pairwise.distance_metrics") for valid metric values.
If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a [sparse graph](https://scikit-learn.org/1.1/glossary.html#term-sparse-graph), in which case only “nonzero” elements may be considered neighbors.
If metric is a callable function, it takes two arrays representing 1D vectors as inputs and must return one value indicating the distance between those vectors. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.
**metric\_params**dict, default=None
Additional keyword arguments for the metric function.
**n\_jobs**int, default=None
The number of parallel jobs to run for neighbors search. `None` means 1 unless in a [`joblib.parallel_backend`](https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend "(in joblib v1.3.0.dev0)") context. `-1` means using all processors. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-n_jobs) for more details.
Attributes:
**effective\_metric\_**str or callable
The distance metric to use. It will be the same as the `metric` parameter or a synonym of it, e.g. ‘euclidean’ if the `metric` parameter is set to ‘minkowski’ and the `p` parameter to 2.
**effective\_metric\_params\_**dict
Additional keyword arguments for the metric function. For most metrics will be same with `metric_params` parameter, but may also contain the `p` parameter value if the `effective_metric_` attribute is set to ‘minkowski’.
**n\_features\_in\_**int
Number of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit).
New in version 0.24.
**feature\_names\_in\_**ndarray of shape (`n_features_in_`,)
Names of features seen during [fit](https://scikit-learn.org/1.1/glossary.html#term-fit). Defined only when `X` has feature names that are all strings.
New in version 1.0.
**n\_samples\_fit\_**int
Number of samples in the fitted data.
See also
[`NearestNeighbors`](sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors "sklearn.neighbors.NearestNeighbors")
Unsupervised learner for implementing neighbor searches.
[`KNeighborsRegressor`](sklearn.neighbors.kneighborsregressor#sklearn.neighbors.KNeighborsRegressor "sklearn.neighbors.KNeighborsRegressor")
Regression based on k-nearest neighbors.
[`KNeighborsClassifier`](sklearn.neighbors.kneighborsclassifier#sklearn.neighbors.KNeighborsClassifier "sklearn.neighbors.KNeighborsClassifier")
Classifier based on the k-nearest neighbors.
[`RadiusNeighborsClassifier`](sklearn.neighbors.radiusneighborsclassifier#sklearn.neighbors.RadiusNeighborsClassifier "sklearn.neighbors.RadiusNeighborsClassifier")
Classifier based on neighbors within a given radius.
#### Notes
See [Nearest Neighbors](../neighbors#neighbors) in the online documentation for a discussion of the choice of `algorithm` and `leaf_size`.
<https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm>
#### Examples
```
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import RadiusNeighborsRegressor
>>> neigh = RadiusNeighborsRegressor(radius=1.0)
>>> neigh.fit(X, y)
RadiusNeighborsRegressor(...)
>>> print(neigh.predict([[1.5]]))
[0.5]
```
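The `weights='distance'` option can noticeably change predictions when the neighbors lie at unequal distances. A minimal sketch, reusing the toy data above with an illustrative query point chosen so the effect is visible; the distance-weighted prediction works out to roughly 0.8 rather than the uniform average of 0.5:

```
>>> from sklearn.neighbors import RadiusNeighborsRegressor
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> neigh = RadiusNeighborsRegressor(radius=1.0, weights='distance')
>>> neigh.fit(X, y)
RadiusNeighborsRegressor(...)
>>> neigh.predict([[1.8]])
array([0.8...])
```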
#### Methods
| | |
| --- | --- |
| [`fit`](#sklearn.neighbors.RadiusNeighborsRegressor.fit "sklearn.neighbors.RadiusNeighborsRegressor.fit")(X, y) | Fit the radius neighbors regressor from the training dataset. |
| [`get_params`](#sklearn.neighbors.RadiusNeighborsRegressor.get_params "sklearn.neighbors.RadiusNeighborsRegressor.get_params")([deep]) | Get parameters for this estimator. |
| [`predict`](#sklearn.neighbors.RadiusNeighborsRegressor.predict "sklearn.neighbors.RadiusNeighborsRegressor.predict")(X) | Predict the target for the provided data. |
| [`radius_neighbors`](#sklearn.neighbors.RadiusNeighborsRegressor.radius_neighbors "sklearn.neighbors.RadiusNeighborsRegressor.radius_neighbors")([X, radius, ...]) | Find the neighbors within a given radius of a point or points. |
| [`radius_neighbors_graph`](#sklearn.neighbors.RadiusNeighborsRegressor.radius_neighbors_graph "sklearn.neighbors.RadiusNeighborsRegressor.radius_neighbors_graph")([X, radius, mode, ...]) | Compute the (weighted) graph of Neighbors for points in X. |
| [`score`](#sklearn.neighbors.RadiusNeighborsRegressor.score "sklearn.neighbors.RadiusNeighborsRegressor.score")(X, y[, sample\_weight]) | Return the coefficient of determination of the prediction. |
| [`set_params`](#sklearn.neighbors.RadiusNeighborsRegressor.set_params "sklearn.neighbors.RadiusNeighborsRegressor.set_params")(\*\*params) | Set the parameters of this estimator. |
fit(*X*, *y*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_regression.py#L412)
Fit the radius neighbors regressor from the training dataset.
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features) or (n\_samples, n\_samples) if metric=’precomputed’
Training data.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_outputs)
Target values.
Returns:
**self**RadiusNeighborsRegressor
The fitted radius neighbors regressor.
get\_params(*deep=True*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L194)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
predict(*X*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_regression.py#L434)
Predict the target for the provided data.
Parameters:
**X**array-like of shape (n\_queries, n\_features), or (n\_queries, n\_indexed) if metric == ‘precomputed’
Test samples.
Returns:
**y**ndarray of shape (n\_queries,) or (n\_queries, n\_outputs), dtype=double
Target values.
radius\_neighbors(*X=None*, *radius=None*, *return\_distance=True*, *sort\_results=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L996)
Find the neighbors within a given radius of a point or points.
Return the indices and distances of each point from the dataset lying in a ball with size `radius` around the points of the query array. Points lying on the boundary are included in the results.
The result points are *not* necessarily sorted by distance to their query point.
Parameters:
**X**array-like of (n\_samples, n\_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**radius**float, default=None
Limiting distance of neighbors to return. The default is the value passed to the constructor.
**return\_distance**bool, default=True
Whether or not to return the distances.
**sort\_results**bool, default=False
If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If `return_distance=False`, setting `sort_results=True` will result in an error.
New in version 0.22.
Returns:
**neigh\_dist**ndarray of shape (n\_samples,) of arrays
Array representing the distances to each point, only present if `return_distance=True`. The distance values are computed according to the `metric` constructor parameter.
**neigh\_ind**ndarray of shape (n\_samples,) of arrays
An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size `radius` around the query points.
#### Notes
Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, `radius_neighbors` returns arrays of objects, where each object is a 1D array of indices or distances.
#### Examples
In the following example, we construct a NeighborsClassifier class from an array representing our data set and ask who’s the closest point to [1, 1, 1]:
```
>>> import numpy as np
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.6)
>>> neigh.fit(samples)
NearestNeighbors(radius=1.6)
>>> rng = neigh.radius_neighbors([[1., 1., 1.]])
>>> print(np.asarray(rng[0][0]))
[1.5 0.5]
>>> print(np.asarray(rng[1][0]))
[1 2]
```
The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time.
radius\_neighbors\_graph(*X=None*, *radius=None*, *mode='connectivity'*, *sort\_results=False*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/neighbors/_base.py#L1205)
Compute the (weighted) graph of Neighbors for points in X.
Neighborhoods are restricted to the points at a distance lower than the radius.
Parameters:
**X**array-like of shape (n\_samples, n\_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
**radius**float, default=None
Radius of neighborhoods. The default is the value passed to the constructor.
**mode**{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; with ‘distance’ the edges are distances between points, where the type of distance depends on the selected metric parameter in the NearestNeighbors class.
**sort\_results**bool, default=False
If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’.
New in version 0.22.
Returns:
**A**sparse-matrix of shape (n\_queries, n\_samples\_fit)
`n_samples_fit` is the number of samples in the fitted data. `A[i, j]` gives the weight of the edge connecting `i` to `j`. The matrix is of CSR format.
See also
[`kneighbors_graph`](sklearn.neighbors.kneighbors_graph#sklearn.neighbors.kneighbors_graph "sklearn.neighbors.kneighbors_graph")
Compute the (weighted) graph of k-Neighbors for points in X.
#### Examples
```
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.5)
>>> neigh.fit(X)
NearestNeighbors(radius=1.5)
>>> A = neigh.radius_neighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 0.],
[1., 0., 1.]])
```
score(*X*, *y*, *sample\_weight=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L677)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and \(v\) is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of `y`, disregarding the input features, would get an \(R^2\) score of 0.0.
Parameters:
**X**array-like of shape (n\_samples, n\_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape `(n_samples, n_samples_fitted)`, where `n_samples_fitted` is the number of samples used in the fitting for the estimator.
**y**array-like of shape (n\_samples,) or (n\_samples, n\_outputs)
True values for `X`.
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
Returns:
**score**float
\(R^2\) of `self.predict(X)` wrt. `y`.
#### Notes
The \(R^2\) score used when calling `score` on a regressor uses `multioutput='uniform_average'` from version 0.23 to keep consistent with default value of [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score"). This influences the `score` method of all the multioutput regressors (except for [`MultiOutputRegressor`](sklearn.multioutput.multioutputregressor#sklearn.multioutput.MultiOutputRegressor "sklearn.multioutput.MultiOutputRegressor")).
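#### Examples

A small sketch, assuming the toy data from the class example above, showing that `score` is simply [`r2_score`](sklearn.metrics.r2_score#sklearn.metrics.r2_score "sklearn.metrics.r2_score") applied to the regressor's own predictions:

```
>>> import numpy as np
>>> from sklearn.metrics import r2_score
>>> from sklearn.neighbors import RadiusNeighborsRegressor
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> reg = RadiusNeighborsRegressor(radius=1.0).fit(X, y)
>>> np.isclose(reg.score(X, y), r2_score(y, reg.predict(X)))
True
```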
set\_params(*\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/base.py#L218)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as [`Pipeline`](sklearn.pipeline.pipeline#sklearn.pipeline.Pipeline "sklearn.pipeline.Pipeline")). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
scikit_learn sklearn.get_config sklearn.get\_config
===================
sklearn.get\_config()[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/_config.py#L28)
Retrieve current values for configuration set by [`set_config`](sklearn.set_config#sklearn.set_config "sklearn.set_config").
Returns:
**config**dict
Keys are parameter names that can be passed to [`set_config`](sklearn.set_config#sklearn.set_config "sklearn.set_config").
See also
[`config_context`](sklearn.config_context#sklearn.config_context "sklearn.config_context")
Context manager for global scikit-learn configuration.
[`set_config`](sklearn.set_config#sklearn.set_config "sklearn.set_config")
Set global scikit-learn configuration.
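#### Examples

A minimal sketch showing how `get_config` reflects temporary changes made through `config_context`; the `assume_finite` key shown below is one of the standard configuration flags:

```
>>> import sklearn
>>> sklearn.get_config()['assume_finite']
False
>>> with sklearn.config_context(assume_finite=True):
...     print(sklearn.get_config()['assume_finite'])
True
```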
scikit_learn sklearn.feature_extraction.image.reconstruct_from_patches_2d sklearn.feature\_extraction.image.reconstruct\_from\_patches\_2d
================================================================
sklearn.feature\_extraction.image.reconstruct\_from\_patches\_2d(*patches*, *image\_size*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/feature_extraction/image.py#L420)
Reconstruct the image from all of its patches.
Patches are assumed to overlap and the image is constructed by filling in the patches from left to right, top to bottom, averaging the overlapping regions.
Read more in the [User Guide](../feature_extraction#image-feature-extraction).
Parameters:
**patches**ndarray of shape (n\_patches, patch\_height, patch\_width) or (n\_patches, patch\_height, patch\_width, n\_channels)
The complete set of patches. If the patches contain colour information, channels are indexed along the last dimension: RGB patches would have `n_channels=3`.
**image\_size**tuple of int (image\_height, image\_width) or (image\_height, image\_width, n\_channels)
The size of the image that will be reconstructed.
Returns:
**image**ndarray of shape image\_size
The reconstructed image.
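#### Examples

A minimal round-trip sketch: extracting every overlapping patch of a small image with `extract_patches_2d` and averaging them back yields the original image.

```
>>> import numpy as np
>>> from sklearn.feature_extraction.image import extract_patches_2d
>>> from sklearn.feature_extraction.image import reconstruct_from_patches_2d
>>> image = np.arange(16, dtype=float).reshape(4, 4)
>>> patches = extract_patches_2d(image, (2, 2))
>>> patches.shape
(9, 2, 2)
>>> reconstructed = reconstruct_from_patches_2d(patches, (4, 4))
>>> np.allclose(image, reconstructed)
True
```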
Examples using `sklearn.feature_extraction.image.reconstruct_from_patches_2d`
-----------------------------------------------------------------------------
[Image denoising using dictionary learning](../../auto_examples/decomposition/plot_image_denoising#sphx-glr-auto-examples-decomposition-plot-image-denoising-py)
scikit_learn sklearn.linear_model.lasso_path sklearn.linear\_model.lasso\_path
=================================
sklearn.linear\_model.lasso\_path(*X*, *y*, *\**, *eps=0.001*, *n\_alphas=100*, *alphas=None*, *precompute='auto'*, *Xy=None*, *copy\_X=True*, *coef\_init=None*, *verbose=False*, *return\_n\_iter=False*, *positive=False*, *\*\*params*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/linear_model/_coordinate_descent.py#L192)
Compute Lasso path with coordinate descent.
The Lasso optimization function varies for mono and multi-outputs.
For mono-output tasks it is:
```
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
```
For multi-output tasks it is:
```
(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
```
Where:
```
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
```
i.e. the sum of norm of each row.
Read more in the [User Guide](../linear_model#lasso).
Parameters:
**X**{array-like, sparse matrix} of shape (n\_samples, n\_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If `y` is mono-output then `X` can be sparse.
**y**{array-like, sparse matrix} of shape (n\_samples,) or (n\_samples, n\_targets)
Target values.
**eps**float, default=1e-3
Length of the path. `eps=1e-3` means that `alpha_min / alpha_max = 1e-3`.
**n\_alphas**int, default=100
Number of alphas along the regularization path.
**alphas**ndarray, default=None
List of alphas where to compute the models. If `None` alphas are set automatically.
**precompute**‘auto’, bool or array-like of shape (n\_features, n\_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to `'auto'` let us decide. The Gram matrix can also be passed as argument.
**Xy**array-like of shape (n\_features,) or (n\_features, n\_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
**copy\_X**bool, default=True
If `True`, X will be copied; else, it may be overwritten.
**coef\_init**ndarray of shape (n\_features, ), default=None
The initial values of the coefficients.
**verbose**bool or int, default=False
Amount of verbosity.
**return\_n\_iter**bool, default=False
Whether to return the number of iterations or not.
**positive**bool, default=False
If set to True, forces coefficients to be positive. (Only allowed when `y.ndim == 1`).
**\*\*params**kwargs
Keyword arguments passed to the coordinate descent solver.
Returns:
**alphas**ndarray of shape (n\_alphas,)
The alphas along the path where models are computed.
**coefs**ndarray of shape (n\_features, n\_alphas) or (n\_targets, n\_features, n\_alphas)
Coefficients along the path.
**dual\_gaps**ndarray of shape (n\_alphas,)
The dual gaps at the end of the optimization for each alpha.
**n\_iters**list of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha.
See also
[`lars_path`](sklearn.linear_model.lars_path#sklearn.linear_model.lars_path "sklearn.linear_model.lars_path")
Compute Least Angle Regression or Lasso path using LARS algorithm.
[`Lasso`](sklearn.linear_model.lasso#sklearn.linear_model.Lasso "sklearn.linear_model.Lasso")
The Lasso is a linear model that estimates sparse coefficients.
[`LassoLars`](sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars "sklearn.linear_model.LassoLars")
Lasso model fit with Least Angle Regression a.k.a. Lars.
[`LassoCV`](sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV "sklearn.linear_model.LassoCV")
Lasso linear model with iterative fitting along a regularization path.
[`LassoLarsCV`](sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV "sklearn.linear_model.LassoLarsCV")
Cross-validated Lasso using the LARS algorithm.
[`sklearn.decomposition.sparse_encode`](sklearn.decomposition.sparse_encode#sklearn.decomposition.sparse_encode "sklearn.decomposition.sparse_encode")
Estimator that can be used to transform signals into a sparse linear combination of atoms from a fixed dictionary.
#### Notes
For an example, see [examples/linear\_model/plot\_lasso\_coordinate\_descent\_path.py](../../auto_examples/linear_model/plot_lasso_coordinate_descent_path#sphx-glr-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py).
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
Note that in certain cases, the Lars solver may be significantly faster at implementing this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by `lars_path`.
#### Examples
Comparing lasso\_path and lars\_path with interpolation:
```
>>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
```
```
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]]
```
Examples using `sklearn.linear_model.lasso_path`
------------------------------------------------
[Lasso and Elastic Net](../../auto_examples/linear_model/plot_lasso_coordinate_descent_path#sphx-glr-auto-examples-linear-model-plot-lasso-coordinate-descent-path-py)
scikit_learn sklearn.metrics.precision_score sklearn.metrics.precision\_score
================================
sklearn.metrics.precision\_score(*y\_true*, *y\_pred*, *\**, *labels=None*, *pos\_label=1*, *average='binary'*, *sample\_weight=None*, *zero\_division='warn'*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/_classification.py#L1647)
Compute the precision.
The precision is the ratio `tp / (tp + fp)` where `tp` is the number of true positives and `fp` the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
The best value is 1 and the worst value is 0.
Read more in the [User Guide](../model_evaluation#precision-recall-f-measure-metrics).
Parameters:
**y\_true**1d array-like, or label indicator array / sparse matrix
Ground truth (correct) target values.
**y\_pred**1d array-like, or label indicator array / sparse matrix
Estimated targets as returned by a classifier.
**labels**array-like, default=None
The set of labels to include when `average != 'binary'`, and their order if `average is None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `y_true` and `y_pred` are used in sorted order.
Changed in version 0.17: Parameter `labels` improved for multiclass problem.
**pos\_label**str or int, default=1
The class to report if `average='binary'` and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting `labels=[pos_label]` and `average != 'binary'` will report scores for that label only.
**average**{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} or None, default=’binary’
This parameter is required for multiclass/multilabel targets. If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
`'binary'`:
Only report results for the class specified by `pos_label`. This is applicable only if targets (`y_{true,pred}`) are binary.
`'micro'`:
Calculate metrics globally by counting the total true positives, false negatives and false positives.
`'macro'`:
Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
`'weighted'`:
Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.
`'samples'`:
Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from [`accuracy_score`](sklearn.metrics.accuracy_score#sklearn.metrics.accuracy_score "sklearn.metrics.accuracy_score")).
**sample\_weight**array-like of shape (n\_samples,), default=None
Sample weights.
**zero\_division**“warn”, 0 or 1, default=”warn”
Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised.
Returns:
**precision**float (if average is not None) or array of float of shape (n\_unique\_labels,)
Precision of the positive class in binary classification or weighted average of the precision of each class for the multiclass task.
See also
[`precision_recall_fscore_support`](sklearn.metrics.precision_recall_fscore_support#sklearn.metrics.precision_recall_fscore_support "sklearn.metrics.precision_recall_fscore_support")
Compute precision, recall, F-measure and support for each class.
[`recall_score`](sklearn.metrics.recall_score#sklearn.metrics.recall_score "sklearn.metrics.recall_score")
Compute the ratio `tp / (tp + fn)` where `tp` is the number of true positives and `fn` the number of false negatives.
[`PrecisionRecallDisplay.from_estimator`](sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.from_estimator "sklearn.metrics.PrecisionRecallDisplay.from_estimator")
Plot precision-recall curve given an estimator and some data.
[`PrecisionRecallDisplay.from_predictions`](sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay.from_predictions "sklearn.metrics.PrecisionRecallDisplay.from_predictions")
Plot precision-recall curve given binary class predictions.
[`multilabel_confusion_matrix`](sklearn.metrics.multilabel_confusion_matrix#sklearn.metrics.multilabel_confusion_matrix "sklearn.metrics.multilabel_confusion_matrix")
Compute a confusion matrix for each class or sample.
#### Notes
When `true positive + false positive == 0`, precision returns 0 and raises `UndefinedMetricWarning`. This behavior can be modified with `zero_division`.
#### Examples
```
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='macro')
0.22...
>>> precision_score(y_true, y_pred, average='micro')
0.33...
>>> precision_score(y_true, y_pred, average='weighted')
0.22...
>>> precision_score(y_true, y_pred, average=None)
array([0.66..., 0. , 0. ])
>>> y_pred = [0, 0, 0, 0, 0, 0]
>>> precision_score(y_true, y_pred, average=None)
array([0.33..., 0. , 0. ])
>>> precision_score(y_true, y_pred, average=None, zero_division=1)
array([0.33..., 1. , 1. ])
>>> # multilabel classification
>>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
>>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
>>> precision_score(y_true, y_pred, average=None)
array([0.5, 1. , 1. ])
```
Examples using `sklearn.metrics.precision_score`
------------------------------------------------
[Probability Calibration curves](../../auto_examples/calibration/plot_calibration_curve#sphx-glr-auto-examples-calibration-plot-calibration-curve-py)
[Precision-Recall](../../auto_examples/model_selection/plot_precision_recall#sphx-glr-auto-examples-model-selection-plot-precision-recall-py)
scikit_learn sklearn.metrics.pairwise.laplacian_kernel sklearn.metrics.pairwise.laplacian\_kernel
==========================================
sklearn.metrics.pairwise.laplacian\_kernel(*X*, *Y=None*, *gamma=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/metrics/pairwise.py#L1304)
Compute the laplacian kernel between X and Y.
The laplacian kernel is defined as:
```
K(x, y) = exp(-gamma ||x-y||_1)
```
for each pair of rows x in X and y in Y. Read more in the [User Guide](../metrics#laplacian-kernel).
New in version 0.17.
Parameters:
**X**ndarray of shape (n\_samples\_X, n\_features)
A feature array.
**Y**ndarray of shape (n\_samples\_Y, n\_features), default=None
An optional second feature array. If `None`, uses `Y=X`.
**gamma**float, default=None
If None, defaults to 1.0 / n\_features.
Returns:
**kernel\_matrix**ndarray of shape (n\_samples\_X, n\_samples\_Y)
The kernel matrix.
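#### Examples

A small sketch checking the formula by hand: both rows of `X` are at L1 distance 1 from the single row of `Y`, so each entry equals `exp(-gamma * 1)`.

```
>>> import numpy as np
>>> from sklearn.metrics.pairwise import laplacian_kernel
>>> X = np.array([[0., 0.], [1., 1.]])
>>> Y = np.array([[1., 0.]])
>>> laplacian_kernel(X, Y, gamma=0.5)
array([[0.6065...],
       [0.6065...]])
>>> float(np.exp(-0.5 * 1))
0.6065...
```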
scikit_learn sklearn.datasets.make_sparse_uncorrelated sklearn.datasets.make\_sparse\_uncorrelated
===========================================
sklearn.datasets.make\_sparse\_uncorrelated(*n\_samples=100*, *n\_features=10*, *\**, *random\_state=None*)[[source]](https://github.com/scikit-learn/scikit-learn/blob/f3f51f9b6/sklearn/datasets/_samples_generator.py#L1335)
Generate a random regression problem with sparse uncorrelated design.
This dataset is described in Celeux et al. [1] as:
```
X ~ N(0, 1)
y(X) = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1.5 * X[:, 3]
```
Only the first 4 features are informative. The remaining features are useless.
Read more in the [User Guide](https://scikit-learn.org/1.1/datasets/sample_generators.html#sample-generators).
Parameters:
**n\_samples**int, default=100
The number of samples.
**n\_features**int, default=10
The number of features.
**random\_state**int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See [Glossary](https://scikit-learn.org/1.1/glossary.html#term-random_state).
Returns:
**X**ndarray of shape (n\_samples, n\_features)
The input samples.
**y**ndarray of shape (n\_samples,)
The output values.
#### References
[1] G. Celeux, M. El Anbari, J.-M. Marin, C. P. Robert, “Regularization in regression: comparing Bayesian and frequentist methods in a poorly informative situation”, 2009.
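#### Examples

A minimal usage sketch showing the shapes of the generated design matrix and target:

```
>>> from sklearn.datasets import make_sparse_uncorrelated
>>> X, y = make_sparse_uncorrelated(n_samples=50, n_features=10, random_state=0)
>>> X.shape
(50, 10)
>>> y.shape
(50,)
```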
qunit QUnit API QUnit API
=========
QUnit is a powerful, easy-to-use JavaScript unit test suite.
If you’re new to QUnit, check out [Getting Started](https://qunitjs.com/intro/).
QUnit has no dependencies and supports Node.js, SpiderMonkey, and all [major browsers](https://qunitjs.com/intro/#in-the-browser).
qunit QUnit.onUncaughtException() QUnit.onUncaughtException()
===========================
version added: [2.17.0](https://github.com/qunitjs/qunit/releases/tag/2.17.0)
Description
-----------
`QUnit.onUncaughtException( error )`
Handle a global error that should result in a failed test run.
| name | description |
| --- | --- |
| `error` (any) | Usually an `Error` object, but any other thrown or rejected value may be given as well. |
Examples
--------
```
const error = new Error('Failed to reverse the polarity of the neutron flow');
QUnit.onUncaughtException(error);
```
```
process.on('uncaughtException', QUnit.onUncaughtException);
```
```
window.addEventListener('unhandledrejection', function (event) {
QUnit.onUncaughtException(event.reason);
});
```
qunit QUnit.stack() QUnit.stack()
=============
version added: [1.19.0](https://github.com/qunitjs/qunit/releases/tag/1.19.0)
Description
-----------
`QUnit.stack( offset = 0 )`
Return a single line string representing the stacktrace (call stack).
| name | description |
| --- | --- |
| `offset` (number) | Set the stacktrace line offset. Defaults to `0` |
This method returns a single line string representing the stacktrace from where it was called. According to its offset argument, `QUnit.stack()` will return the corresponding line from the call stack.
The default offset is 0 and will return the current location where it was called.
Not all [browsers support retrieving stacktraces](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/Stack#Browser_compatibility). In those, `QUnit.stack()` will return `undefined`.
Examples
--------
The stacktrace line can be used on custom assertions and reporters. The following example [logs](../callbacks/qunit.log) the line of each passing assertion.
```
QUnit.log(function (details) {
if (details.result) {
// 5 is the line reference for the assertion method, not the following line.
console.log(QUnit.stack(5));
}
});
QUnit.test('foo', assert => {
// the log callback will report the position of the following line.
assert.true(true);
});
```
qunit QUnit.push() QUnit.push()
============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
deprecated: [2.1.0](https://github.com/qunitjs/qunit/releases/tag/2.1.0)
Description
-----------
`QUnit.push( result, actual, expected, message )`
Report the result of a custom assertion.
This method is **deprecated** and it’s recommended to use [`pushResult`](../assert/pushresult) in the assertion context instead.
| name | description |
| --- | --- |
| `result` (boolean) | Result of the assertion |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | A short description of the assertion |
`QUnit.push` applies to the currently running test, and it may leak assertions in asynchronous mode. Check out [`assert.pushResult()`](../assert/pushresult) to define a proper custom assertion.
Invoking `QUnit.push` allows you to create a readable expectation that is not defined by any of QUnit’s built-in assertions.
qunit QUnit.extend() QUnit.extend()
==============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
deprecated: [2.12.0](https://github.com/qunitjs/qunit/releases/tag/2.12.0)
Description
-----------
`QUnit.extend( target, mixin )`
Copy the properties defined by a mixin object into a target object.
This method is **deprecated** and it’s recommended to use [`Object.assign()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) instead.
| name | description |
| --- | --- |
| `target` | An object whose properties are to be modified |
| `mixin` | An object describing which properties should be modified |
This method will modify the `target` object to contain the “own” properties defined by the `mixin`. If the `mixin` object specifies the value of any attribute as `undefined`, this property will instead be removed from the `target` object.
Examples
--------
Use `QUnit.extend` to merge two objects.
```
QUnit.test('QUnit.extend', assert => {
const base = {
a: 1,
b: 2,
z: 3
};
QUnit.extend(base, {
b: 2.5,
c: 3,
z: undefined
});
assert.strictEqual(base.a, 1, 'Unspecified values are not modified');
assert.strictEqual(base.b, 2.5, 'Existing values are updated');
assert.strictEqual(base.c, 3, 'New values are defined');
assert.false('z' in base, 'Values specified as `undefined` are removed');
});
```
qunit QUnit.assert QUnit.assert
============
version added: [1.7.0](https://github.com/qunitjs/qunit/releases/tag/1.7.0)
Description
-----------
Namespace for QUnit assertion methods. This object is the prototype for the internal Assert class of which instances are passed as the argument to [`QUnit.test()`](../qunit/test) callbacks.
This object contains QUnit’s [built-in assertion methods](https://api.qunitjs.com/assert/), and may be extended by plugins to register additional assertion methods.
See [`assert.pushResult()`](../assert/pushresult) for how to create a custom assertion.
qunit QUnit.dump.parse() QUnit.dump.parse()
==================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.dump.parse( data )`
Extensible data dumping and string serialization.
| name | description |
| --- | --- |
| `data` | Data structure or object to parse. |
This method does string serialization by parsing data structures and objects. It parses DOM elements to a string representation of their outer HTML. By default, nested structures will be displayed up to five levels deep. Anything beyond that is replaced by `[object Object]` and `[object Array]` placeholders.
If you need more or less output, change the value of `QUnit.dump.maxDepth`, representing how deep the elements should be parsed.
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.1](https://github.com/qunitjs/qunit/releases/tag/2.1.0) | The `QUnit.jsDump` alias was removed. |
| [QUnit 1.15](https://github.com/qunitjs/qunit/releases/tag/1.15.0) | The `QUnit.jsDump` interface was renamed to `QUnit.dump`.The `QUnit.jsDump` alias is deprecated. |
Examples
--------
The following is an example from [grunt-contrib-qunit](https://github.com/gruntjs/grunt-contrib-qunit/blob/188a29af7817e1798fdd95f1ab7d3069231e4859/chrome/bridge.js#L42-L60), which sends results from QUnit (running in Headless Chrome) to a CLI tool.
```
QUnit.log(function (obj) {
var actual;
var expected;
if (!obj.result) {
// Format before sending
actual = QUnit.dump.parse(obj.actual);
expected = QUnit.dump.parse(obj.expected);
}
// ...
});
```
---
This example shows the formatted representation of a DOM element.
```
var qHeader = document.getElementById('qunit-header');
var parsed = QUnit.dump.parse(qHeader);
console.log(parsed);
// Logs: '<h1 id="qunit-header"></h1>'
```
---
Limit output to one or two levels
```
var input = {
parts: {
front: [],
back: []
}
};
QUnit.dump.maxDepth = 1;
console.log(QUnit.dump.parse(input));
// Logs: { "parts": [object Object] }
QUnit.dump.maxDepth = 2;
console.log(QUnit.dump.parse(input));
// Logs: { "parts": { "back": [object Array], "front": [object Array] } }
```
qunit QUnit.config.seed QUnit.config.seed
=================
version added: [1.23.0](https://github.com/qunitjs/qunit/releases/tag/1.23.0)
Description
-----------
Enable randomized ordering of tests.
| | |
| --- | --- |
| type | `string` or `boolean` or `undefined` |
| default | `undefined` |
This option is also available as [CLI option](https://qunitjs.com/cli/), and as URL query parameter in the browser.
When set to boolean true, or a string, QUnit will run tests in a [seeded-random order](https://en.wikipedia.org/wiki/Random_seed).
The provided string will be used as the seed in a pseudo-random number generator to ensure that results are reproducible. The randomization will also respect the [reorder](reorder) option if enabled, and re-run failed tests first without randomizing them.
Randomly ordering your tests can help identify non-atomic tests which either depend on a previous test or are leaking state to subsequent tests.
If `seed` is boolean true (or set as URL query parameter without a value), then QUnit will generate on-demand a new random value to use as seed. You can then read the seed at runtime from the configuration value, and use it to reproduce the same test sequence later.
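Examples
--------
A short sketch of the two modes; the literal seed string below is only an illustrative placeholder:

```
// Reproduce a specific shuffled order on every run.
QUnit.config.seed = 'a1b2c3';

// Or let QUnit generate a seed, then read `QUnit.config.seed` back at
// runtime (e.g. to print it) so a failing order can be reproduced later:
// QUnit.config.seed = true;
```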
qunit QUnit.config.testTimeout QUnit.config.testTimeout
========================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Set a global default timeout in milliseconds after which a test will fail. This helps to detect async tests that are broken, and prevents tests from running indefinitely.
| | |
| --- | --- |
| type | `number` or `undefined` |
| default | `undefined` |
This can be overridden on a per-test basis via [assert.timeout()](../assert/timeout). If you don’t have per-test overrides, it is recommended to set this to a relatively high value (e.g. `30000` for 30 seconds) to avoid intermittent test failures from unrelated delays one might encounter in a browser or CI service.
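Examples
--------
A short sketch, assuming a 30-second global default with a single slower test overriding it via `assert.timeout()`:

```
QUnit.config.testTimeout = 30000; // fail any test still pending after 30s

QUnit.test('slow async operation', assert => {
  assert.timeout(60000); // this one test may take up to 60 seconds
  const done = assert.async();
  setTimeout(done, 100);
  assert.true(true);
});
```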
qunit QUnit.config.storage QUnit.config.storage
====================
version added: [2.1.0](https://github.com/qunitjs/qunit/releases/tag/2.1.0)
Description
-----------
The Storage object to use for remembering failed tests between runs.
| | |
| --- | --- |
| type | `object` or `undefined` |
| default | `globalThis.sessionStorage` |
This is mainly for use by the HTML Reporter, where `sessionStorage` will be used if supported by the browser.
While Node.js and other non-browser environments are not known to offer something like this by default, one can attach any preferred form of persistence by assigning an object that implements the [`Storage` interface methods](https://html.spec.whatwg.org/multipage/webstorage.html#the-storage-interface) of the Web Storage API.
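Examples
--------
A minimal in-memory sketch for non-browser environments, assuming an object exposing the Web Storage methods is sufficient; this is illustrative only, not an official adapter:

```
const memoryStorage = (() => {
  const map = new Map();
  return {
    getItem: (key) => (map.has(key) ? map.get(key) : null),
    setItem: (key, value) => { map.set(key, String(value)); },
    removeItem: (key) => { map.delete(key); },
    clear: () => { map.clear(); },
    key: (i) => Array.from(map.keys())[i] ?? null,
    get length () { return map.size; }
  };
})();

QUnit.config.storage = memoryStorage;
```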
qunit QUnit.config.urlConfig QUnit.config.urlConfig
======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
In the HTML Reporter, this array is used to generate additional input fields in the toolbar.
| | |
| --- | --- |
| type | `array` |
| default | `[]` |
This property controls which form controls to put into the QUnit toolbar. By default, the `noglobals` and `notrycatch` checkboxes are registered. By adding to this array, you can add your own checkboxes and select dropdowns.
Each array item should be an object shaped as follows:
```
({
id: string,
label: string,
tooltip: string, // optional
value: string | array | object // optional
});
```
* The `id` property is used as the key for storing the value under `QUnit.config`, and as URL query parameter.
* The `label` property is used as text label in the user interface.
* The optional `tooltip` property is used as the `title` attribute and should explain what the control is used for.
Each item may also have a `value` property controlling the available options and how the control is rendered.
If `value` is undefined, the option will render as a checkbox. The corresponding URL parameter will be set to “true” when the checkbox is checked, and otherwise will be absent.
If `value` is a string, the option will render as a checkbox. The corresponding URL parameter will be set to the value when the checkbox is checked, and otherwise will be absent.
If `value` is an array, the option will render as a “select one” menu with an empty value as first default option, followed by one option for each item in the array. The corresponding URL parameter will be absent when the empty option is selected, and otherwise will be set to the value of the selected array item.
```
value = [ 'foobar', 'baz' ];
```
If `value` is an object, the option will render as a “select one” menu as for an array. The keys will be used as option values, and the values will be used as option display labels. The corresponding URL parameter will be absent when the empty option is selected, and otherwise will be set to the object key of the selected property.
```
value = {
foobar: 'Foo with bar',
baz: 'Baz'
};
```
Examples
--------
### Add toolbar checkbox
Add a new checkbox to the toolbar. You can then use the `QUnit.config.min` property in your code to implement a behaviour based on it.
```
QUnit.config.urlConfig.push({
id: 'min',
label: 'Minified source',
tooltip: 'Load minified source files instead of the regular unminified ones.'
});
```
### Add dropdown menu
Add a dropdown to the toolbar.
```
QUnit.config.urlConfig.push({
id: 'jquery',
label: 'jQuery version',
value: [ '1.7.2', '1.8.3', '1.9.1' ],
tooltip: 'Which jQuery version to test against.'
});
```
qunit QUnit.config.noglobals QUnit.config.noglobals
======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Check the global object after each test and report new properties as failures.
| | |
| --- | --- |
| type | `boolean` |
| default | `false` |
Enable this option to let QUnit keep track of which global variables and properties exist on the global object (e.g. `window` in browsers). When new global properties are found, they will result in test failures, to make sure your application and your tests are not leaking any state.
This option can also be controlled via the [HTML Reporter](https://qunitjs.com/intro/#in-the-browser) interface.
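Examples
--------
A small sketch of the kind of leak this option catches; the `leaked` property is purely illustrative:

```
QUnit.config.noglobals = true;

QUnit.test('leaks state onto the global object', assert => {
  // Creating a new global property like this would now be reported
  // as a failure once the test finishes.
  window.leaked = 'oops';
  assert.true(true);
});
```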
qunit QUnit.config.hidepassed QUnit.config.hidepassed
=======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
In the HTML Reporter, hide results of passed tests.
| | |
| --- | --- |
| type | `boolean` |
| default | `false` |
This option can also be controlled via the [HTML Reporter](https://qunitjs.com/intro/#in-the-browser).
By default, the HTML Reporter will list (in collapsed form) the names of all passed tests. Enable this option to list only failing tests.
qunit QUnit.config.reorder QUnit.config.reorder
====================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Allow re-running of previously failed tests out of order, before all other tests.
| | |
| --- | --- |
| type | `boolean` |
| default | `true` |
By default, QUnit will first re-run any tests that failed on a previous run. For large test suites, this can speed up your feedback cycle by a lot.
Note that this feature may lead to unexpected failures if you have non-atomic tests that rely on a very specific execution order. You should consider improving such tests, but this option allows you to disable the reordering behaviour.
When a previously failed test is running first, the HTML Reporter displays “*Rerunning previously failed test*” in the summary whereas just “*Running*” is displayed otherwise.
qunit QUnit.config.module QUnit.config.module
===================
version added: [1.8.0](https://github.com/qunitjs/qunit/releases/tag/1.8.0)
Description
-----------
Select a single test module to run by name.
| | |
| --- | --- |
| type | `string` or `undefined` |
| default | `undefined` |
This option can also be set by URL query parameter.
When specified, only a single module will be run if its name is a complete case-insensitive match. If no module name matches, then no tests will be run.
This option is undefined by default, which means all loaded test modules will be run.
See also:
* [QUnit.config.filter](filter)
* [QUnit.config.moduleId](moduleid)
Changelog
---------
| | |
| --- | --- |
| [QUnit 1.23](https://github.com/qunitjs/qunit/releases/tag/1.23.0) | The public config property was restored. |
| [QUnit 1.16](https://github.com/qunitjs/qunit/releases/tag/1.16.0) | The public config property was removed (the URL query parameter was unaffected). |
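Examples
--------
A short usage sketch for this option; the module name `'Router'` is only an illustrative placeholder and is matched case-insensitively against the full module name:

```
// Set before the test run starts, e.g. in a bootstrap file.
QUnit.config.module = 'Router';

QUnit.module('Router', function () {
  QUnit.test('parses a path', assert => {
    assert.true(true);
  });
});
```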
qunit QUnit.config.current QUnit.config.current
====================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Internal object representing the currently running test.
| | |
| --- | --- |
| type | `object` |
This property is not actually a configuration option, but is exposed under `QUnit.config` for use by plugins and other integrations. This offers access to QUnit’s internal `Test` object at runtime.
Internals may change without notice. When possible, use [QUnit.on](../callbacks/qunit.on) or [other callbacks](https://api.qunitjs.com/callbacks/) instead.
Example
-------
Access `QUnit.config.current.testName` to observe the currently running test’s name.
```
function whatsUp () {
console.log(QUnit.config.current.testName);
}
QUnit.test('example', assert => {
whatsUp();
assert.true(true);
});
```
qunit QUnit.config.requireExpects QUnit.config.requireExpects
===========================
version added: [1.7.0](https://github.com/qunitjs/qunit/releases/tag/1.7.0)
Description
-----------
Fail tests that don’t specify how many assertions they expect.
| | |
| --- | --- |
| type | `boolean` |
| default | `false` |
Enabling this option will cause tests to fail if they don’t call [`assert.expect()`](../assert/expect).
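Examples
--------
A short sketch; with this option enabled, a test that omits the `assert.expect()` call would fail:

```
QUnit.config.requireExpects = true;

QUnit.test('declares its assertion count', assert => {
  assert.expect(1); // omitting this call would now fail the test
  assert.true(true);
});
```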
qunit QUnit.config.notrycatch QUnit.config.notrycatch
=======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Disable handling of uncaught exceptions during tests.
| | |
| --- | --- |
| type | `boolean` |
| default | `false` |
This option can also be controlled via the [HTML Reporter](https://qunitjs.com/intro/#in-the-browser) interface, and is supported as URL query parameter.
By default, QUnit handles uncaught exceptions during test execution and reports them as test failures. This lets other tests continue running and allows reporters to summarise results.
Enabling this flag will disable this error handling, allowing you to more easily debug uncaught exceptions through developer tools.
qunit QUnit.config.failOnZeroTests QUnit.config.failOnZeroTests
============================
version added: [2.16.0](https://github.com/qunitjs/qunit/releases/tag/2.16.0)
Description
-----------
Whether to fail the test run if no tests were run.
| | |
| --- | --- |
| type | `boolean` |
| default | `true` |
By default, it is considered an error if no tests were loaded, or if no tests matched the current filter.
Set this option to `false` to let an empty test run result in a success instead.
qunit QUnit.config.autostart QUnit.config.autostart
======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Control when the test run may start, e.g. after asynchronously loading test files with RequireJS, AMD, ES6 dynamic imports, or other means.
| | |
| --- | --- |
| type | `boolean` |
| default | `true` |
In the browser, QUnit by default waits for all `<script>` elements to finish loading (by means of the window `load` event). When using the QUnit CLI, it waits until the specified files are imported.
Set this property to `false` to instruct QUnit to wait longer, allowing you to load test files asynchronously. Remember to call [`QUnit.start()`](../qunit/start) once you’re ready for tests to begin running.
If you asynchronously load test files *without* disabling autostart, you may encounter this warning:
**warning**: Unexpected test after runEnd.
Examples
--------
### ESM Dynamic imports
This example uses the [import()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import) operator to dynamically load ECMAScript module (ESM) files.
```
<script src="../lib/qunit.js"></script>
<script type="module" src="tests.js"></script>
```
```
// tests.js
QUnit.config.autostart = false;
Promise.all([
import('./foo.js'),
import('./bar.js')
]).then(function () {
QUnit.start();
});
```
### Loading with RequireJS
This example uses [RequireJS](https://requirejs.org/) to call a “require” function as defined by the [AMD specification](https://github.com/amdjs/amdjs-api/blob/master/require.md) (Asynchronous Module Definition).
```
QUnit.config.autostart = false;
require(
[
'tests/testModule1',
'tests/testModule2'
],
function () {
QUnit.start();
}
);
```
qunit QUnit.config.collapse QUnit.config.collapse
=====================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
In the HTML Reporter, collapse the details of failing tests after the first one.
| | |
| --- | --- |
| type | `boolean` |
| default | `true` |
By default, QUnit’s HTML Reporter collapses consecutive failing tests showing only the details for the first failed test. The other tests can be expanded manually with a single click on the test title.
Set this option to `false` to expand the details for all failing tests.
qunit QUnit.config.altertitle QUnit.config.altertitle
=======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
In the HTML Reporter, whether to insert a success or failure symbol in the document title.
| | |
| --- | --- |
| type | `boolean` |
| default | `true` |
By default, QUnit updates `document.title` to insert a checkmark or cross symbol to indicate whether the test run passed or failed. This helps quickly spot from the tab bar whether a run passed, without opening it.
If you’re integration-testing code that makes changes to `document.title`, or otherwise conflicts with this feature, you can disable it.
qunit QUnit.config.testId QUnit.config.testId
===================
version added: [1.16.0](https://github.com/qunitjs/qunit/releases/tag/1.16.0)
Description
-----------
In the HTML Reporter, select one or more tests to run by their internal ID.
| | |
| --- | --- |
| type | `array` or `undefined` |
| default | `undefined` |
This option can be controlled via the [HTML Reporter](https://qunitjs.com/intro/#in-the-browser) interface.
This property allows QUnit to run specific tests by their internally hashed identifier. You can specify one or multiple tests to run. This option powers the “Rerun” button in the HTML Reporter.
See also:
* [QUnit.config.filter](filter)
* [QUnit.config.moduleId](moduleid)
qunit QUnit.config.scrolltop QUnit.config.scrolltop
======================
version added: [1.14.0](https://github.com/qunitjs/qunit/releases/tag/1.14.0)
Description
-----------
In the HTML Reporter, ensure the browser is scrolled to the top of the page when the tests are done.
| | |
| --- | --- |
| type | `boolean` |
| default | `true` |
By default, QUnit scrolls the browser to the top of the page when tests are done. This reverses any programmatic scrolling performed by the application or its tests.
Set this option to `false` to disable this behaviour, and thus leave the page in its final scroll position.
qunit QUnit.config.maxDepth QUnit.config.maxDepth
=====================
version added: [1.16.0](https://github.com/qunitjs/qunit/releases/tag/1.16.0)
Description
-----------
In the HTML Reporter, the depth up to which an object will be serialized during the diff of an assertion failure.
| | |
| --- | --- |
| type | `number` |
| default | `5` |
To disable the depth limit, use a value of `-1`.
This is used by [`QUnit.dump.parse()`](../extension/qunit.dump.parse).
qunit QUnit.config.filter QUnit.config.filter
===================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
Select tests to run based on a substring or pattern match.
| | |
| --- | --- |
| type | `string` or `undefined` |
| default | `undefined` |
This option is available as [CLI option](https://qunitjs.com/cli/), as control in the [HTML Reporter](https://qunitjs.com/intro/#in-the-browser), and supported as URL query parameter.
QUnit only runs tests of which the module name or test name are a case-insensitive substring match for the filter string. You can invert the filter by prefixing an exclamation mark (`!`) to the string, in which case we skip the matched tests, and run the tests that don’t match the filter.
You can also match via a regular expression by setting the filter to a regular expression literal, enclosed by slashes, such as `/(this|that)/`.
While substring filters are always **case-insensitive**, a regular expression is case-sensitive by default.
See also:
* [QUnit.config.module](module)
Examples
--------
### Substring filter
The below matches `FooBar` and `foo > bar`, because string matching is case-insensitive.
```
QUnit.config.filter = 'foo';
```
As inversed filter, the below skips `FooBar` and `foo > bar`, but runs `Bar` and `bar > sub`.
```
QUnit.config.filter = '!foo';
```
### Regular expression filter
The below matches `foo` but not `Foo`, because regexes are case-sensitive by default.
```
QUnit.config.filter = '/foo/';
```
The below matches both `foo` and `Foo`.
```
QUnit.config.filter = '/foo/i';
```
The below skips both `foo` and `Foo`.
```
QUnit.config.filter = '!/foo/i';
```
The below matches `foo`, `foo > sub`, and `foo.sub`, but skips `bar`, `bar.foo`, and `FooBar`.
```
QUnit.config.filter = '/^foo/';
```
qunit QUnit.config.fixture QUnit.config.fixture
====================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
In the HTML Reporter, this HTML content will be rendered in the fixture container at the start of each test.
| | |
| --- | --- |
| type | `string` or `null` or `undefined` |
| default | `undefined` |
By default QUnit will observe the initial content of the `#qunit-fixture` element, and use that as the fixture content for all tests. Use this option to configure the fixture content through JavaScript instead.
To disable QUnit’s fixture resetting behaviour, set the option to `null`.
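For example, a minimal sketch that sets the fixture markup from JavaScript (the `#app` element is a hypothetical placeholder), or opts out of fixture resetting:
```
// Rendered into #qunit-fixture at the start of each test
QUnit.config.fixture = '<div id="app"></div>';

// Or disable QUnit's fixture resetting behaviour
// QUnit.config.fixture = null;
```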
qunit QUnit.config.moduleId QUnit.config.moduleId
=====================
version added: [1.23.0](https://github.com/qunitjs/qunit/releases/tag/1.23.0)
Description
-----------
In the HTML Reporter, select one or more modules to run by their internal ID.
| | |
| --- | --- |
| type | `array` or `undefined` |
| default | `undefined` |
This option can be controlled via the [HTML Reporter](https://qunitjs.com/intro/#in-the-browser) interface.
Specify one or more modules to run by their internally hashed identifiers. This option powers the multi-select dropdown menu in the HTML Reporter.
See also:
* [QUnit.config.module](module)
* [QUnit.config.testId](testid)
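For example, setting the option programmatically; the hashed identifiers below are placeholders, since real values are generated internally and exposed via the HTML Reporter’s module dropdown:
```
// Placeholder hashes for illustration
QUnit.config.moduleId = ['1a2b3c4d', '5e6f7a8b'];
```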
qunit QUnit.testDone() QUnit.testDone()
================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.testDone( callback )`
Register a callback to fire whenever a test ends. The callback may be an async function, or a function that returns a promise, which will be waited for before the next callback is handled.
| parameter | description |
| --- | --- |
| callback (function) | Callback to execute. Provides a single argument with the callback Details object |
### Details object
Passed to the callback:
| property | description |
| --- | --- |
| `name` (string) | Name of the current test |
| `module` (string) | Name of the current module |
| `failed` (number) | The number of failed assertions |
| `passed` (number) | The number of passed assertions |
| `total` (number) | The total number of assertions |
| `runtime` (number) | The execution time in milliseconds of the test, including beforeEach and afterEach calls |
| `skipped` (boolean) | Indicates whether or not the current test was skipped |
| `todo` (boolean) | Indicates whether or not the current test was a todo |
Examples
--------
Register a callback that logs results of a single test:
```
QUnit.testDone(details => {
const result = {
'Module name': details.module,
'Test name': details.name,
Assertions: {
Total: details.total,
Passed: details.passed,
Failed: details.failed
},
Skipped: details.skipped,
Todo: details.todo,
Runtime: details.runtime
};
console.log(JSON.stringify(result, null, 2));
});
```
qunit QUnit.moduleDone() QUnit.moduleDone()
==================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.moduleDone( callback )`
Register a callback to fire whenever a module ends. The callback may be an async function, or a function that returns a promise, which will be waited for before the next callback is handled.
| parameter | description |
| --- | --- |
| callback (function) | Callback to execute. Provides a single argument with the callback Details object |
### Details object
Passed to the callback:
| property | description |
| --- | --- |
| `name` (string) | Name of this module |
| `failed` (number) | The number of failed assertions |
| `passed` (number) | The number of passed assertions |
| `total` (number) | The total number of assertions |
| `runtime` (number) | The execution time in milliseconds of this module |
Examples
--------
Register a callback that logs the module results
```
QUnit.moduleDone(details => {
console.log(`Finished running: ${details.name} Failed/total: ${details.failed}/${details.total}`);
});
```
qunit QUnit.testStart() QUnit.testStart()
=================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.testStart( callback )`
Register a callback to fire whenever a test begins. The callback may be an async function, or a function that returns a promise, which will be waited for before the next callback is handled.
| parameter | description |
| --- | --- |
| callback (function) | Callback to execute. Provides a single argument with the callback Details object |
### Details object
Passed to the callback:
| property | description |
| --- | --- |
| `name` (string) | Name of the next test to run |
| `module` (string) | Name of the current module |
| `testId` (string) | Id of the next test to run |
| `previousFailure` (boolean) | Whether the next test failed on a previous run |
Examples
--------
Register a callback that logs the module and test name:
```
QUnit.testStart(details => {
console.log(`Now running: ${details.module} ${details.name}`);
});
```
qunit QUnit.moduleStart() QUnit.moduleStart()
===================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.moduleStart( callback )`
Register a callback to fire whenever a module begins. The callback can return a promise that will be waited for before the next callback is handled.
| parameter | description |
| --- | --- |
| callback (function) | Callback to execute. Provides a single argument with the callback Details object |
### Details object
Passed to the callback:
| property | description |
| --- | --- |
| `name` (string) | Name of the next module to run |
Examples
--------
Register a callback that logs the module name
```
QUnit.moduleStart(details => {
console.log(`Now running: ${details.name}`);
});
```
qunit QUnit.begin() QUnit.begin()
=============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.begin( callback )`
Register a callback to fire when the test run begins. The callback may be an async function, or a function that returns a Promise, which will be waited for before the next callback is handled.
The callback will be called once, before QUnit runs any tests.
| parameter | description |
| --- | --- |
| `callback` (function) | Callback to execute, called with a `details` object. |
### Details object
| property | description |
| --- | --- |
| `totalTests` (number) | Number of registered tests |
| `modules` (array) | List of registered modules, as `{ name: string, moduleId: string }` objects. |
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.19.0](https://github.com/qunitjs/qunit/releases/tag/2.19.0) | Added `moduleId` to the `details.modules` objects. |
| [QUnit 1.16](https://github.com/qunitjs/qunit/releases/tag/1.16.0) | Added `details.modules` property, containing `{ name: string }` objects. |
| [QUnit 1.15](https://github.com/qunitjs/qunit/releases/tag/1.15.0) | Added `details.totalTests` property. |
Examples
--------
Get total number of tests known at the start.
```
QUnit.begin(details => {
console.log(`Test amount: ${details.totalTests}`);
});
```
Use async-await to wait for some asynchronous work:
```
QUnit.begin(async details => {
await someAsyncWork();
console.log(`Test amount: ${details.totalTests}`);
});
```
Using classic ES5 syntax:
```
QUnit.begin(function (details) {
console.log('Test amount:' + details.totalTests);
});
```
```
function someAsyncWork () {
return new Promise(function (resolve, reject) {
// do some async work
resolve();
});
}
QUnit.begin(function (details) {
return someAsyncWork().then(function () {
console.log('Test amount:' + details.totalTests);
});
});
```
qunit QUnit.done() QUnit.done()
============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.done( callback )`
Register a callback to fire when the test run has ended. The callback may be an async function, or a function that returns a Promise, which will be waited for before the next callback is handled.
| parameter | description |
| --- | --- |
| `callback` (function) | Callback to execute, called with a `details` object: |
### Details object
| property | description |
| --- | --- |
| `failed` (number) | Number of failed assertions |
| `passed` (number) | Number of passed assertions |
| `total` (number) | Total number of assertions |
| `runtime` (number) | Duration of the test run in milliseconds |
Use of `details` is **deprecated** and it’s recommended to use [`QUnit.on('runEnd')`](qunit.on#the-runend-event) instead.
Caveats:
* This callback reports the **internal assertion count**.
* The default browser and CLI interfaces for QUnit and other popular test frameworks, and most CI integrations, report the number of tests. Reporting the number of *assertions* may be confusing to developers.
* Failed assertions of a [`test.todo()`](../qunit/test.todo) test are reported exactly as such. While rare, this means that a test run and all tests within it may be reported as passing, while internally there were some failed assertions. Unfortunately, this internal detail is exposed for compatibility reasons.
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.2](https://github.com/qunitjs/qunit/releases/tag/2.2.0) | Deprecate `details` parameter in favour of `QUnit.on('runEnd')`. |
Examples
--------
Register a callback that logs internal assertion counts.
```
QUnit.done(function (details) {
console.log(
'Total: ' + details.total + ' Failed: ' + details.failed +
' Passed: ' + details.passed + ' Runtime: ' + details.runtime
);
});
```
qunit QUnit.log() QUnit.log()
===========
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.log( callback )`
Register a callback to fire whenever an assertion completes.
**NOTE: The QUnit.log() callback does not handle promises and MUST be synchronous.**
| parameter | description |
| --- | --- |
| callback (function) | Callback to execute. Provides a single argument with the callback Details object |
### Details object
Passed to the callback:
| property | description |
| --- | --- |
| `result` (boolean) | The boolean result of an assertion, `true` means passed, `false` means failed. |
| `actual` | One side of a comparison assertion. Can be *undefined* when `ok()` is used. |
| `expected` | One side of a comparison assertion. Can be *undefined* when `ok()` is used. |
| `message` (string) | A string description provided by the assertion. |
| `source` (string) | The associated stacktrace, either from an exception or pointing to the source of the assertion. Depends on browser support for providing stacktraces, so can be undefined. |
| `module` (string) | The test module name of the assertion. If the assertion is not connected to any module, the property’s value will be *undefined*. |
| `name` (string) | The test block name of the assertion. |
| `runtime` (number) | The time elapsed in milliseconds since the start of the containing [`QUnit.test()`](../qunit/test), including setup. |
| `todo` (boolean) | Indicates whether or not this assertion was part of a todo test. |
Examples
--------
Register a callback that logs the assertion result and its message:
```
QUnit.log(details => {
console.log(`Log: ${details.result}, ${details.message}`);
});
```
---
Log the module name and test result whenever an assertion fails:
```
QUnit.log(details => {
if (details.result) {
return;
}
let output = `[FAILED] ${details.module} > ${details.name}`;
if (details.message) {
output += `: ${details.message}`;
}
if (details.actual) {
output += `\nexpected: ${details.expected}\nactual: ${details.actual}`;
}
if (details.source) {
output += `\n${details.source}`;
}
console.log(output);
});
```
qunit QUnit.on() QUnit.on()
==========
version added: [2.2.0](https://github.com/qunitjs/qunit/releases/tag/2.2.0)
Description
-----------
`QUnit.on( eventName, callback )`
Register a callback to fire whenever a specified event is emitted.
This API implements the [js-reporters CRI standard](https://github.com/js-reporters/js-reporters/blob/v2.1.0/spec/cri-draft.adoc), and is the primary interface for use by continuous integration plugins and other reporting software.
| type | parameter | description |
| --- | --- | --- |
| `string` | `eventName` | Name of an event. |
| `Function` | `callback` | A callback function. |
The `runStart` event
--------------------
The `runStart` event indicates the beginning of a test run. It is emitted exactly once, and before any other events.
| | | |
| --- | --- | --- |
| `Object` | `testCounts` | Aggregate counts about tests. |
| `number` | `testCounts.total` | Total number of registered tests. |
```
QUnit.on('runStart', runStart => {
console.log(`Test plan: ${runStart.testCounts.total}`);
});
```
The `suiteStart` event
----------------------
The `suiteStart` event indicates the beginning of a module. It is eventually followed by a corresponding `suiteEnd` event.
| | | |
| --- | --- | --- |
| `string` | `name` | Name of the module. |
| `Array<string>` | `fullName` | List of one or more strings, containing (in order) the names of any ancestor modules and the name of the current module. |
```
QUnit.on('suiteStart', suiteStart => {
console.log('suiteStart', suiteStart);
// name: 'my module'
// fullName: ['grandparent', 'parent', 'my module']
});
```
The `suiteEnd` event
--------------------
The `suiteEnd` event indicates the end of a module. It is emitted after its corresponding `suiteStart` event.
| | | |
| --- | --- | --- |
| `string` | `name` | Name of the module. |
| `Array<string>` | `fullName` | List of one or more strings, containing (in order) the names of any ancestor modules and the name of the current module. |
| `string` | `status` | Aggregate result of tests in this module, one of: `failed` (at least one test has failed) or `passed` (there were no failing tests, which means there were only tests with a passed, skipped, or todo status). |
| `number` | `runtime` | Duration of the module in milliseconds. |
```
QUnit.on('suiteEnd', suiteEnd => {
console.log(suiteEnd);
// …
});
```
The `testStart` event
---------------------
The `testStart` event indicates the beginning of a test. It is eventually followed by a corresponding `testEnd` event.
| | | |
| --- | --- | --- |
| `string` | `name` | Name of the test. |
| `string|null` | `moduleName` | The module the test belongs to, or null for a global test. |
| `Array<string>` | `fullName` | List (in order) of the names of any ancestor modules and the name of the test itself. |
```
QUnit.on('testStart', testStart => {
console.log(testStart);
// name: 'my test'
// moduleName: 'my module'
// fullName: ['parent', 'my module', 'my test']
// name: 'global test'
// moduleName: null
// fullName: ['global test']
});
```
The `testEnd` event
-------------------
The `testEnd` event indicates the end of a test. It is emitted after its corresponding `testStart` event.
Properties of a testEnd object:
| | | |
| --- | --- | --- |
| `string` | `name` | Name of the test. |
| `string|null` | `moduleName` | The module the test belongs to, or null for a global test. |
| `Array<string>` | `fullName` | List (in order) of the names of any ancestor modules and the name of the test itself. |
| `string` | `status` | Result of the test, one of: `passed` (all assertions passed, or no assertions found), `failed` (at least one assertion failed, or it is a [todo test](../qunit/test.todo) that no longer has any failing assertions), `skipped` (the test was intentionally not run), or `todo` (the test is “todo” and still has a failing assertion). |
| `number` | `runtime` | Duration of the test in milliseconds. |
| `Array<FailedAssertion>` | `errors` | For tests with status `failed` or `todo`, there will be at least one failed assertion. However, the list may be empty if the status is `failed` due to a “todo” test having no failed assertions. Note that all negative test outcomes communicate their details in this manner. For example, timeouts, uncaught errors, and [global pollution](../config/noglobals) also synthesize a failed assertion. |
Properties of a FailedAssertion object:
| | | |
| --- | --- | --- |
| `boolean` | `passed` | False for a failed assertion. |
| `string|undefined` | `message` | Description of what the assertion checked. |
| `any` | `actual` | The actual value passed to the assertion. |
| `any` | `expected` | The expected value passed to the assertion. |
| `string|undefined` | `stack` | Stack trace, may be undefined if the result came from an old web browser. |
```
QUnit.on('testEnd', testEnd => {
if (testEnd.status === 'failed') {
console.error('Failed! ' + testEnd.fullName.join(' > '));
testEnd.errors.forEach(assertion => {
console.error(assertion);
// message: speedometer
// actual: 75
// expected: 88
// stack: at dmc.test.js:12
});
}
});
```
The `runEnd` event
------------------
The `runEnd` event indicates the end of a test run. It is emitted exactly once.
| | | |
| --- | --- | --- |
| `string` | `status` | Aggregate result of all tests, one of: `failed` (at least one test failed or a global error occurred) or `passed` (there were no failed tests, which means there were only tests with a passed, skipped, or todo status). If [`QUnit.config.failOnZeroTests`](../config/failonzerotests) is disabled, then the run may also pass if there were no tests. |
| `Object` | `testCounts` | Aggregate counts about tests: |
| `number` | `testCounts.passed` | Number of passed tests. |
| `number` | `testCounts.failed` | Number of failed tests. |
| `number` | `testCounts.skipped` | Number of skipped tests. |
| `number` | `testCounts.todo` | Number of todo tests. |
| `number` | `testCounts.total` | Total number of tests, equal to the sum of the above properties. |
| `number` | `runtime` | Total duration of the run in milliseconds. |
```
QUnit.on('runEnd', runEnd => {
console.log(`Passed: ${runEnd.testCounts.passed}`);
console.log(`Failed: ${runEnd.testCounts.failed}`);
console.log(`Skipped: ${runEnd.testCounts.skipped}`);
console.log(`Todo: ${runEnd.testCounts.todo}`);
console.log(`Total: ${runEnd.testCounts.total}`);
});
```
qunit assert.async() assert.async()
==============
version added: [1.16.0](https://github.com/qunitjs/qunit/releases/tag/1.16.0)
Description
-----------
`async( count = 1 )`
Instruct QUnit to wait for an asynchronous operation.
| name | description |
| --- | --- |
| `count` (number) | Number of expected calls. Defaults to `1`. |
`assert.async()` returns a callback function and pauses test processing until the callback function is called. The callback will throw an `Error` if it is invoked more often than the required call count.
This replaces functionality previously provided by `QUnit.stop()` and [`QUnit.start()`](../qunit/start).
Examples
--------
### Wait for callback
Tell QUnit to wait for the `done()` call from a callback.
```
function fetchDouble (num, callback) {
const double = num * 2;
callback(double);
}
QUnit.test('async example', assert => {
const done = assert.async();
fetchDouble(21, res => {
assert.strictEqual(res, 42, 'Result');
done();
});
});
```
### Wait for multiple callbacks
Call `assert.async()` multiple times to wait for multiple async operations. Each `done` callback must be called exactly once for the test to pass.
```
QUnit.test('two async calls', assert => {
const done1 = assert.async();
const done2 = assert.async();
fetchDouble(3, res => {
assert.strictEqual(res, 6, 'double of 3');
done1();
});
fetchDouble(9, res => {
assert.strictEqual(res, 18, 'double of 9');
done2();
});
});
```
### Require multiple calls
The `count` parameter can be used to require multiple calls to the same callback. In the below example, the test passes after exactly three calls.
```
function uploadBatch (batch, notify, complete) {
batch.forEach((item) => {
// Do something with item
notify();
});
complete(null);
}
QUnit.test('multiple calls example', assert => {
assert.timeout(1000);
const notify = assert.async(3);
const done = assert.async();
uploadBatch(
['a', 'b', 'c'],
notify,
(err) => {
assert.strictEqual(err, null, 'complete error parameter');
done();
}
);
});
```
qunit assert.notStrictEqual() assert.notStrictEqual()
=======================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`notStrictEqual( actual, expected, message = "" )`
A strict comparison, checking for inequality.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description |
The `notStrictEqual` assertion uses the strict inequality operator (`!==`) to compare the actual and expected arguments. When they aren’t equal, the assertion passes; otherwise, it fails. When it fails, both actual and expected values are displayed in the test result, in addition to a given message.
[`assert.equal()`](equal) can be used to test equality.
[`assert.strictEqual()`](strictequal) can be used to test strict equality.
Examples
--------
```
QUnit.test('example', assert => {
const result = '2';
// succeeds; while the number 2 and the string '2' are loosely equal, they are strictly different.
assert.notStrictEqual(result, 2);
});
```
qunit assert.propEqual() assert.propEqual()
==================
version added: [1.11.0](https://github.com/qunitjs/qunit/releases/tag/v1.11.0)
Description
-----------
`propEqual( actual, expected, message = "" )`
Compare an object’s own properties using a strict comparison.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description of the actual expression |
The `propEqual` assertion only compares an object’s own properties. This means the expected value does not need to be an instance of the same class or otherwise inherit the same prototype. This is in contrast with [`assert.deepEqual()`](deepequal).
This assertion fails if the values differ, or if there are extra properties, or if some properties are missing.
This method is recursive and can compare any nested or complex object via a plain object.
See also
--------
* Use [`assert.propContains()`](propcontains) to only check a subset of properties.
* Use [`assert.notPropEqual()`](notpropequal) to test for the inequality of object properties instead.
Examples
--------
Compare the property values of two objects.
```
QUnit.test('example', assert => {
class Foo {
constructor () {
this.x = 1;
this.y = 2;
}
walk () {}
run () {}
}
const foo = new Foo();
// succeeds, own properties are strictly equal,
// and inherited properties (such as which constructor) are ignored.
assert.propEqual(foo, {
x: 1,
y: 2
});
});
```
Using classic ES5 syntax:
```
QUnit.test('example', function (assert) {
function Foo () {
this.x = 1;
this.y = 2;
}
Foo.prototype.walk = function () {};
Foo.prototype.run = function () {};
var foo = new Foo();
// succeeds, own properties are strictly equal.
var expected = {
x: 1,
y: 2
};
assert.propEqual(foo, expected);
});
```
qunit assert.false() assert.false()
==============
version added: [2.11.0](https://github.com/qunitjs/qunit/releases/tag/2.11.0)
Description
-----------
`false( actual, message = "" )`
A strict comparison that passes if the first argument is boolean `false`.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `message` (string) | Short description |
`false` requires just one argument. If the argument is strictly equal to boolean `false`, the assertion passes; otherwise, it fails.
This method is similar to the `assertFalse()` method found in xUnit-style frameworks.
[`assert.true()`](true) can be used to explicitly test for a true value.
Examples
--------
```
QUnit.test('example', assert => {
// success
assert.false(false, 'boolean false');
// failure
assert.false('foo', 'non-empty string');
assert.false('', 'empty string');
assert.false(0, 'number zero');
assert.false(true, 'boolean true');
assert.false(NaN, 'NaN value');
assert.false(null, 'null value');
assert.false(undefined, 'undefined value');
});
```
qunit assert.pushResult() assert.pushResult()
===================
version added: [1.22.0](https://github.com/qunitjs/qunit/releases/tag/1.22.0)
Description
-----------
`pushResult( data )`
Report the result of a custom assertion.
| name | description |
| --- | --- |
| `data.result` (boolean) | Result of the assertion |
| `data.actual` | Expression being tested |
| `data.expected` | Known comparison value |
| `data.message` (string or undefined) | Short description of the assertion |
Examples
--------
If you need to express an expectation that is not abstracted by a built-in QUnit assertion, you can perform your own logic ad-hoc in an expression, and then pass two directly comparable values to [`assert.strictEqual()`](strictequal), or pass your own representative boolean result to [`assert.true()`](true).
```
QUnit.test('bad example of remainder', assert => {
const result = 4;
const actual = (result % 3) === 2;
assert.true(actual, 'remainder');
// In case of failure:
// > Actual: false
// > Expected: true
//
// No mention of the actual remainder.
// No mention of the expected value.
});
QUnit.test('good example of remainder', assert => {
const result = 4;
assert.strictEqual(result % 3, 2, 'remainder');
// In case of failure:
// > Actual: 1
// > Expected: 2
});
QUnit.test('bad example of between', assert => {
const actual = 3;
const isBetween = (actual >= 1 && actual <= 10);
assert.true(isBetween, 'result between 1 and 10');
// In case of failure:
// > Actual: false
// > Expected: true
//
// No mention of the actual value.
// No mention of the expected value.
// Cannot be expressed in a useful way with strictEqual()
});
```
### Custom assertion
With a custom assertion method, you can control how an assertion should be evaluated, separately from how its actual and expected values are described in case of a failure.
For example:
```
QUnit.assert.between = function (actual, from, to, message) {
const isBetween = (actual >= from && actual <= to);
this.pushResult({
result: isBetween,
actual: actual,
expected: `between ${from} and ${to} inclusive`,
message: message
});
};
QUnit.test('custom assertion example', assert => {
const result = 3;
assert.between(result, 1, 10, 'result');
// Example of failure if result is out of range
// > actual: 42
// > expected: between 1 and 10
});
```
qunit assert.notPropEqual() assert.notPropEqual()
=====================
version added: [1.11.0](https://github.com/qunitjs/qunit/releases/tag/v1.11.0)
Description
-----------
`notPropEqual( actual, expected, message = "" )`
Compare an object’s own properties using a strict inequality comparison.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description |
The `notPropEqual` assertion compares only an object’s own properties, using the strict inequality operator (`!==`).
The test passes if there are properties with different values, or extra properties, or missing properties.
See also
--------
* Use [`assert.notPropContains()`](notpropcontains) to only check for the absence or inequality of some properties.
* Use [`assert.propEqual()`](propequal) to test for equality of properties instead.
Examples
--------
Compare the values of two objects’ properties.
```
QUnit.test('example', assert => {
class Foo {
constructor () {
this.x = '1';
this.y = 2;
}
walk () {}
run () {}
}
const foo = new Foo();
// succeeds, only own property values are compared (using strict equality),
// and property "x" is indeed not equal (string instead of number).
assert.notPropEqual(foo, {
x: 1,
y: 2
});
});
```
qunit assert.notPropContains() assert.notPropContains()
========================
version added: [2.18.0](https://github.com/qunitjs/qunit/releases/tag/2.18.0)
Description
-----------
`notPropContains( actual, expected, message = "" )`
Check that an object does not contain certain properties.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description |
The `notPropContains` assertion compares the subset of properties in the expected object, and tests that these keys are either absent or hold a value that is different according to a strict equality comparison.
This method is recursive and allows partial comparison of nested objects as well.
See also
--------
* Use [`assert.propContains()`](propcontains) to test for the presence and equality of properties instead.
Examples
--------
```
QUnit.test('example', assert => {
const result = {
foo: 0,
vehicle: {
timeCircuits: 'on',
fluxCapacitor: 'fluxing',
engine: 'running'
},
quux: 1
};
// succeeds, property "timeCircuits" is actually "on"
assert.notPropContains(result, {
vehicle: {
timeCircuits: 'off'
}
});
// succeeds, property "wings" is not in the object
assert.notPropContains(result, {
vehicle: {
wings: 'flapping'
}
});
function Point (x, y) {
this.x = x;
this.y = y;
}
assert.notPropContains(
new Point(10, 20),
{ z: 30 }
);
const nested = {
north: [ /* ... */ ],
east: new Point(10, 20),
south: [ /* ... */ ],
west: [ /* ... */ ]
};
assert.notPropContains(nested, { east: new Point(88, 42) });
assert.notPropContains(nested, { east: { x: 88 } });
});
```
qunit assert.expect() assert.expect()
===============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`expect( amount )`
Specify how many assertions are expected in a test.
| name | description |
| --- | --- |
| `amount` | Number of expected assertions in this test |
This is most commonly used as `assert.expect(0)`, which indicates that a test may pass without making any assertions. This means the test is only used to verify that the code runs to completion, without any uncaught errors. This is essentially the inverse of [`assert.throws()`](throws).
This can also be used to explicitly require a certain number of assertions to be recorded in a given test. If afterwards the number of assertions does not match the expected count, the test will fail.
It is recommended to test asynchronous code with [`assert.step()`](step) or [`assert.async()`](async) instead.
Examples
--------
### Example: No assertions
A test without any assertions:
```
QUnit.test('example', function (assert) {
assert.expect(0);
var android = new Robot();
android.up(2);
android.down(2);
android.left();
android.right();
android.left();
android.right();
android.attack();
android.jump();
});
```
### Example: Custom assert
If you use a generic assertion library that throws when an expectation is not met, you can use `assert.expect(0)` if there are no other assertions needed in the test.
```
QUnit.test('example', function (assert) {
assert.expect(0);
var android = new Robot(database);
android.run();
database.assertNoOpenConnections();
});
```
### Example: Explicit count
Require an explicit assertion count.
```
QUnit.test('example', function (assert) {
assert.expect(2);
function calc (x, operation) {
return operation(x);
}
let result = calc(2, function (x) {
assert.true(true, 'calc() calls operation function');
return x * x;
});
assert.strictEqual(result, 4, '2 squared equals 4');
});
```
qunit assert.propContains() assert.propContains()
=====================
version added: [2.18.0](https://github.com/qunitjs/qunit/releases/tag/2.18.0)
Description
-----------
`propContains( actual, expected, message = "" )`
Check that an object contains certain properties.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description of the actual expression |
The `propContains` assertion compares only the **subset** of properties in the expected object, and tests that these keys exist as own properties with strictly equal values.
This method is recursive and allows partial comparison of nested objects as well.
See also
--------
* Use [`assert.propEqual()`](propequal) to compare all properties, considering extra properties as unexpected.
* Use [`assert.notPropContains()`](notpropcontains) to test for the absence or inequality of properties.
Examples
--------
```
QUnit.test('example', assert => {
const result = {
foo: 0,
vehicle: {
timeCircuits: 'on',
fluxCapacitor: 'fluxing',
engine: 'running'
},
quux: 1
};
assert.propContains(result, {
foo: 0,
vehicle: { fluxCapacitor: 'fluxing' }
});
function Point (x, y) {
this.x = x;
this.y = y;
}
assert.propContains(
new Point(10, 20),
{ y: 20 }
);
assert.propContains(
[ 'a', 'b' ],
{ 1: 'b' }
);
const nested = {
north: [ /* ... */ ],
east: new Point(10, 20),
south: [ /* ... */ ],
west: [ /* ... */ ]
};
assert.propContains(nested, { east: new Point(10, 20) });
assert.propContains(nested, { east: { x: 10, y: 20 } });
assert.propContains(nested, { east: { x: 10 } });
});
```
qunit assert.verifySteps() assert.verifySteps()
====================
version added: [2.2.0](https://github.com/qunitjs/qunit/releases/tag/2.2.0)
Description
-----------
`verifySteps( steps, message = "" )`
Verify the presence and exact order of previously marked steps in a test.
| name | description |
| --- | --- |
| `steps` (array) | List of strings |
| `message` (string) | Short description |
The Step API provides an easy way to verify execution logic to a high degree of accuracy and precision, whether for asynchronous code, event-driven code, or callback-driven code.
For example, you can mark steps to observe and validate whether parts of your code are reached correctly, or to check the frequency (how often) an asynchronous code path is executed. You can also capture any unexpected steps, which are automatically detected and shown as part of the test failure.
This assertion compares a given array of string values to a list of previously recorded steps, as marked via previous calls to [`assert.step()`](step).
Calling `verifySteps()` will clear and reset the internal list of steps. This allows multiple independent sequences of `assert.step()` to exist within the same test.
Refer to the below examples and learn how to use the Step API in your test suite.
Examples
--------
### Test event-based interface
This example uses a class based on an [`EventEmitter`](https://nodejs.org/api/events.html), such as the one provided by Node.js and other environments:
```
QUnit.test('good example', async assert => {
const maker = new WordMaker();
maker.on('start', () => {
assert.step('start');
});
maker.on('data', (word) => {
assert.step(word);
});
maker.on('end', () => {
assert.step('end');
});
maker.on('error', message => {
assert.step('error: ' + message);
});
await maker.process('3.1');
assert.verifySteps(['start', '3', 'point', '1', 'end']);
});
```
When approaching this scenario **without the Step API** one might be tempted to place comparison checks directly inside event callbacks. It is considered an anti-pattern to make dummy assertions in callbacks that the test does not have control over. This creates loose assurances, and can easily cause false positives (a callback might not run, run out of order, or run multiple times). It also offers rather limited debugging information.
```
// WARNING: This is a BAD example
QUnit.test('bad example 1', async assert => {
const maker = new WordMaker();
maker.on('start', () => {
assert.true(true, 'start');
});
maker.on('middle', () => {
assert.true(true, 'middle');
});
maker.on('end', () => {
assert.true(true, 'end');
});
maker.on('error', () => {
assert.true(false, 'error');
});
await maker.process();
});
```
A less fragile approach could involve a local array that we check afterwards with [`deepEqual`](deepequal). This catches out-of-order issues, unexpected values, and duplicate values. It also provides detailed debugging information in case of problems. The below is in essence how the Step API works:
```
QUnit.test('manual example without Step API', async assert => {
const values = [];
const maker = new WordMaker();
maker.on('start', () => {
values.push('start');
});
maker.on('middle', () => {
values.push('middle');
});
maker.on('end', () => {
values.push('end');
});
maker.on('error', () => {
values.push('error');
});
await maker.process();
assert.deepEqual(values, ['start', 'middle', 'end']);
});
```
### Test publish/subscribe system
Use the **Step API** to verify messages received in a Pub-Sub channel or topic.
```
QUnit.test('good example', assert => {
const publisher = new Publisher();
const subscriber1 = (message) => assert.step(`Sub 1: ${message}`);
const subscriber2 = (message) => assert.step(`Sub 2: ${message}`);
publisher.subscribe(subscriber1);
publisher.subscribe(subscriber2);
publisher.publish('Hello!');
publisher.unsubscribe(subscriber1);
publisher.publish('World!');
assert.verifySteps([
'Sub 1: Hello!',
'Sub 2: Hello!',
'Sub 2: World!'
]);
});
```
### Multiple steps verifications in one test
The internal buffer of observed steps is automatically reset when calling `verifySteps()`.
```
QUnit.test('multiple verifications example', assert => {
assert.step('one');
assert.step('two');
assert.verifySteps(['one', 'two']);
assert.step('three');
assert.step('four');
assert.verifySteps(['three', 'four']);
});
```
qunit assert.timeout() assert.timeout()
================
version added: [2.4.0](https://github.com/qunitjs/qunit/releases/tag/2.4.0)
Description
-----------
`timeout( duration )`
Set how long to wait for async operations to finish.
| name | description |
| --- | --- |
| `duration` (number) | The length of time to wait, in milliseconds. |
This assertion defines how long to wait (at most) in the current test. It overrides [`QUnit.config.testTimeout`](../config/testtimeout) on a per-test basis.
The timeout length only applies when a test actually involves asynchronous functions or promises. If `0` is passed, then awaiting or returning any Promise may fail the test.
If `assert.timeout()` is called after a different timeout is already set, the old timeout will be cleared and the new duration will be used to start a new timer.
Examples
--------
```
QUnit.test('wait for an event', assert => {
assert.timeout(1000); // Timeout after 1 second
const done = assert.async();
const adder = new NumberAdder();
adder.on('ready', res => {
assert.strictEqual(res, 12);
done();
});
adder.run([ 1, 1, 2, 3, 5 ]);
});
```
```
QUnit.test('wait for an async function', async assert => {
assert.timeout(500); // Timeout after 0.5 seconds
const result = await asyncAdder(5, 7);
assert.strictEqual(result, 12);
});
```
Using classic ES5 syntax:
```
QUnit.test('wait for a returned promise', function (assert) {
assert.timeout(500); // Timeout after 0.5 seconds
var promise = asyncAdder(5, 7);
return promise.then(function (result) {
assert.strictEqual(result, 12);
});
});
```
qunit assert.true() assert.true()
=============
version added: [2.11.0](https://github.com/qunitjs/qunit/releases/tag/2.11.0)
Description
-----------
`true( actual, message = "" )`
A strict comparison that passes if the first argument is boolean `true`.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `message` (string) | Short description of the actual expression |
`true()` requires just one argument. If the argument is strictly equal to boolean `true`, the assertion passes; otherwise, it fails.
This method is similar to the `assertTrue()` method found in xUnit-style frameworks.
[`assert.false()`](false) can be used to explicitly test for a false value.
Examples
--------
```
QUnit.test('example', assert => {
// success
assert.true(true, 'boolean true');
// failure
assert.true('foo', 'non-empty string');
assert.true('', 'empty string');
assert.true(0, 'number zero');
assert.true(false, 'boolean false');
assert.true(NaN, 'NaN value');
assert.true(null, 'null value');
assert.true(undefined, 'undefined value');
});
```
qunit assert.notOk() assert.notOk()
==============
version added: [1.18.0](https://github.com/qunitjs/qunit/releases/tag/1.18.0)
Description
-----------
`notOk( state, message = "" )`
A boolean check that passes when the first argument is falsy.
| name | description |
| --- | --- |
| `state` | Expression being tested |
| `message` (string) | Short description |
This assertion requires only one argument. If the argument evaluates to false, the assertion passes; otherwise, it fails.
To strictly compare against boolean false, use [`assert.false()`](false).
Examples
--------
```
QUnit.test('example', assert => {
// success
assert.notOk(false, 'boolean false');
assert.notOk('', 'empty string');
assert.notOk(0, 'number zero');
assert.notOk(NaN, 'NaN value');
assert.notOk(null, 'null value');
assert.notOk(undefined, 'undefined value');
// failure
assert.notOk('foo', 'non-empty string');
assert.notOk(true, 'boolean true');
assert.notOk(1, 'number one');
});
```
qunit assert.deepEqual() assert.deepEqual()
==================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`deepEqual( actual, expected, message = "" )`
A recursive and strict comparison, considering all own and inherited properties.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description of the actual expression |
This assertion compares the full objects as passed. For primitive values, a strict comparison is performed. For objects, the object identity is disregarded and instead a recursive comparison of all own and inherited properties is used. This means arrays, plain objects, and arbitrary class instance objects can all be compared in this way.
The deep comparison includes built-in support for Date objects, regular expressions (RegExp), NaN, as well as ES6 features such as Symbol, Set, and Map objects.
To assert strict equality on own properties only, refer to [`assert.propEqual()`](propequal) instead.
[`assert.notDeepEqual()`](notdeepequal) can be used to check for inequality instead.
Examples
--------
Validate the properties and values of a given object.
```
QUnit.test('passing example', assert => {
const result = { foo: 'bar' };
assert.deepEqual(result, { foo: 'bar' });
});
```
```
QUnit.test('failing example', assert => {
const result = {
a: 'Albert',
b: 'Berta',
num: 123
};
// fails because the number 123 is not strictly equal to the string "123".
assert.deepEqual(result, {
a: 'Albert',
b: 'Berta',
num: '123'
});
});
```
qunit assert.notEqual() assert.notEqual()
=================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`notEqual( actual, expected, message = "" )`
A loose inequality comparison, checking for non-strict differences between two values.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description |
The `notEqual` assertion uses the loose inequality operator (`!=`) to compare the actual and expected values. When they aren’t equal, the assertion passes; otherwise, it fails. When it fails, both actual and expected values are displayed in the test result, in addition to a given message.
[`assert.equal()`](equal) can be used to test equality.
[`assert.notStrictEqual()`](notstrictequal) can be used to test strict inequality.
Examples
--------
The simplest assertion example:
```
QUnit.test('passing example', assert => {
const result = '2';
// succeeds, 1 and 2 are different.
assert.notEqual(result, 1);
});
QUnit.test('failing example', assert => {
const result = '2';
// fails, the number 2 and the string "2" are considered equal when
// compared loosely. Use `assert.notStrictEqual` to consider them different.
assert.notEqual(result, 2);
});
```
qunit assert.strictEqual() assert.strictEqual()
====================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`strictEqual( actual, expected, message = "" )`
A strict type and value comparison.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description of the actual expression |
The `strictEqual()` assertion provides the most rigid comparison of type and value with the strict equality operator (`===`).
[`assert.equal()`](equal) can be used to test non-strict equality.
[`assert.notStrictEqual()`](notstrictequal) can be used to explicitly test strict inequality.
Changelog
---------
* Prior to QUnit 1.1, this method was known as `assert.same()`.
The alias was removed in QUnit 1.3.
Examples
--------
Compare the value of two primitives, having the same value and type.
```
QUnit.test('strictEqual example', assert => {
const result = 2;
assert.strictEqual(result, 2);
});
```
qunit assert.rejects() assert.rejects()
================
version added: [2.5.0](https://github.com/qunitjs/qunit/releases/tag/2.5.0)
Description
-----------
`rejects( promise, message = "" )`
`rejects( promise, expectedMatcher, message = "" )`
Test if the provided promise rejects, and optionally compare the rejection value.
| name | description |
| --- | --- |
| `promise` (thenable) | Promise to test for rejection |
| `expectedMatcher` | Rejection value matcher |
| `message` (string) | Short description of the assertion |
When testing code that is expected to return a rejected promise based on a specific set of circumstances, use `assert.rejects()` for testing and comparison.
The `expectedMatcher` argument can be:
* A function that returns `true` when the assertion should be considered passing.
* An Error object.
* A base constructor, checked as `rejectionValue instanceof expectedMatcher`.
* A RegExp that matches (or partially matches) `rejectionValue.toString()`.
Note: in order to avoid confusion between the `message` and the `expectedMatcher`, the `expectedMatcher` **can not** be a string.
Examples
--------
```
QUnit.test('rejects example', assert => {
// simple check
assert.rejects(Promise.reject('some error'));
// simple check
assert.rejects(
Promise.reject('some error'),
'optional description here'
);
// match pattern on actual error
assert.rejects(
Promise.reject(new Error('some error')),
/some error/,
'optional description here'
);
// Using a custom error constructor
function CustomError (message) {
this.message = message;
}
CustomError.prototype.toString = function () {
return this.message;
};
// actual error is an instance of the expected constructor
assert.rejects(
Promise.reject(new CustomError('some error')),
CustomError
);
// actual error has strictly equal `constructor`, `name` and `message` properties
// of the expected error object
assert.rejects(
Promise.reject(new CustomError('some error')),
new CustomError('some error')
);
// custom validation arrow function
assert.rejects(
Promise.reject(new CustomError('some error')),
(err) => err.toString() === 'some error'
);
// custom validation function
assert.rejects(
Promise.reject(new CustomError('some error')),
function (err) {
return err.toString() === 'some error';
}
);
});
```
The `assert.rejects()` method returns a `Promise` which handles the (often asynchronous) resolution and rejection logic for test successes and failures. It is not required to `await` the returned value, since QUnit internally handles the async control for you and waits for a settled state. However, if your test code requires a consistent and more isolated state between `rejects` calls, then this should be explicitly awaited to hold back the next statements.
```
QUnit.test('stateful rejects example', async assert => {
let value;
// asynchronously resolve if value < 5, and reject otherwise
function asyncChecker () {
return new Promise((resolve, reject) => {
setTimeout(() => {
if (value < 5) {
resolve();
} else {
reject('bad value: ' + value);
}
}, 10);
});
}
value = 8;
await assert.rejects(asyncChecker(), /bad value: 8/);
// if the above was not awaited, then the next line would change the value
// before the previous assertion could occur, and would cause a test failure
value = Infinity;
await assert.rejects(asyncChecker(), /bad value: Infinity/);
});
```
qunit assert.notDeepEqual() assert.notDeepEqual()
=====================
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`notDeepEqual( actual, expected, message = "" )`
An inverted deep equal comparison.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description |
This assertion fails if the actual and expected values are recursively equal by strict comparison, considering both own and inherited properties.
The assertion passes if there are structural differences, type differences, or even a subtle difference in a particular property value.
This is the inverse of [`assert.deepEqual()`](deepequal).
Examples
--------
Compare the value of two objects.
```
QUnit.test('example', assert => {
const result = { foo: 'yep' };
// succeeds, objects are similar but have a different foo value.
assert.notDeepEqual(result, { foo: 'nope' });
});
```
qunit assert.step() assert.step()
=============
version added: [2.2.0](https://github.com/qunitjs/qunit/releases/tag/2.2.0)
Description
-----------
`step( value )`
Record a step for later verification.
| name | description |
| --- | --- |
| `value` (string) | Relevant string value, or short description, to mark this step. |
This assertion registers a passing assertion with the provided string. This and any other steps should be verified later in the test via [`assert.verifySteps()`](verifysteps).
The Step API provides an easy way to verify execution logic to a high degree of accuracy and precision, whether for asynchronous code, event-driven code, or callback-driven code.
Examples
--------
```
QUnit.test('example', function (assert) {
var maker = new WordMaker();
maker.on('start', () => {
assert.step('start');
});
maker.on('data', (word) => {
assert.step(word);
});
maker.on('end', () => {
assert.step('end');
});
maker.process('3.1');
assert.verifySteps([ 'start', '3', 'point', '1', 'end' ]);
});
```
*Note: See [`assert.verifySteps()`](verifysteps) for more detailed examples.*
qunit assert.equal() assert.equal()
==============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`equal( actual, expected, message = "" )`
A non-strict comparison of two values.
| name | description |
| --- | --- |
| `actual` | Expression being tested |
| `expected` | Known comparison value |
| `message` (string) | Short description of the actual expression |
The `equal` assertion uses the loose equality operator (`==`) to compare the actual and expected arguments. When they are equal, the assertion passes; otherwise, it fails. When it fails, both actual and expected values are displayed in the test result, in addition to a given message.
This method is similar to the `assertEquals()` method found in xUnit-style frameworks.
To explicitly test inequality, use [`assert.notEqual()`](notequal).
To test for strict equality, use [`assert.strictEqual()`](strictequal).
Changelog
---------
* Prior to QUnit 1.1, this method was known as `assert.equals`.
The alias was removed in QUnit 1.3.
Examples
--------
The simplest assertion example:
```
QUnit.test('a test', function (assert) {
assert.equal(1, '1', "String '1' and number 1 have the same value");
});
```
A slightly more thorough set of assertions:
```
QUnit.test('equal test', function (assert) {
assert.equal(0, 0, 'Zero, Zero; equal succeeds');
assert.equal('', 0, 'Empty, Zero; equal succeeds');
assert.equal('', '', 'Empty, Empty; equal succeeds');
assert.equal('three', 3, 'Three, 3; equal fails');
assert.equal(null, false, 'null, false; equal fails');
});
```
qunit assert.ok() assert.ok()
===========
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`ok( state, message = "" )`
A boolean check that passes when the first argument is truthy.
| name | description |
| --- | --- |
| `state` | Expression being tested |
| `message` (string) | Short description |
This assertion requires only one argument. If the argument evaluates to true, the assertion passes; otherwise, it fails.
To strictly compare against boolean true, use [`assert.true()`](true).
For the inverse of `ok()`, refer to [`assert.notOk()`](notok)
Examples
--------
```
QUnit.test('example', assert => {
// success
assert.ok(true, 'boolean true');
assert.ok('foo', 'non-empty string');
assert.ok(1, 'number one');
// failure
assert.ok(false, 'boolean false');
assert.ok('', 'empty string');
assert.ok(0, 'number zero');
assert.ok(NaN, 'NaN value');
assert.ok(null, 'null value');
assert.ok(undefined, 'undefined value');
});
```
qunit assert.throws() assert.throws()
===============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`throws( blockFn, message = "" )`
`throws( blockFn, expectedMatcher, message = "" )`
Test if a callback throws an exception, and optionally compare the thrown error.
| name | description |
| --- | --- |
| `blockFn` (function) | Function to execute |
| `expectedMatcher` | Expected error matcher |
| `message` (string) | Short description of the assertion |
When testing code that is expected to throw an exception based on a specific set of circumstances, use `assert.throws()` to catch the error object for testing and comparison.
The `expectedMatcher` argument can be:
* An Error object.
* An Error constructor, checked as `errorValue instanceof expectedMatcher`.
* A RegExp that matches (or partially matches) the string representation.
* A callback Function that must return `true` to pass the assertion check.
In very few environments, like Closure Compiler, `throws` may cause an error. There you can use `assert.raises()`. It has the same signature and behaviour, just a different name.
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.12](https://github.com/qunitjs/qunit/releases/tag/2.12.0) | Added support for arrow functions as `expectedMatcher` callback function. |
| [QUnit 1.9](https://github.com/qunitjs/qunit/releases/tag/v1.9.0) | `assert.raises()` was renamed to `assert.throws()`.The `assert.raises()` method remains supported as an alias. |
Examples
--------
```
QUnit.test('throws example', assert => {
// simple check
assert.throws(function () {
throw new Error('boo');
});
// simple check
assert.throws(
function () {
throw new Error('boo');
},
'optional description here'
);
// match pattern on actual error
assert.throws(
function () {
throw new Error('some error');
},
/some error/,
'optional description here'
);
// using a custom error constructor
function CustomError (message) {
this.message = message;
}
CustomError.prototype.toString = function () {
return this.message;
};
// actual error is an instance of the expected constructor
assert.throws(
function () {
throw new CustomError('some error');
},
CustomError
);
// actual error has strictly equal `constructor`, `name` and `message` properties
// of the expected error object
assert.throws(
function () {
throw new CustomError('some error');
},
new CustomError('some error')
);
// custom validation arrow function
assert.throws(
function () {
throw new CustomError('some error');
},
(err) => err.toString() === 'some error'
);
// custom validation function
assert.throws(
function () {
throw new CustomError('some error');
},
function (err) {
return err.toString() === 'some error';
}
);
});
```
qunit QUnit.test.each() QUnit.test.each()
=================
version added: [2.16.0](https://github.com/qunitjs/qunit/releases/tag/2.16.0)
Description
-----------
`QUnit.test.each( name, dataset, callback )`
`QUnit.test.only.each( name, dataset, callback )`
`QUnit.test.skip.each( name, dataset, callback )`
`QUnit.test.todo.each( name, dataset, callback )`
Add tests using a data provider.
| parameter | description |
| --- | --- |
| `name` (string) | Title of unit being tested |
| `dataset` (array or object) | Array or object of data values passed to each test case |
| `callback` (function) | Function that performs the test |
### Callback parameters
| parameter | description |
| --- | --- |
| `assert` (object) | A new instance object with the [assertion methods](https://api.qunitjs.com/assert/) |
| `data` (any) | Data value |
Use this method to add multiple tests that are similar, but with different data passed in.
`QUnit.test.each()` generates multiple calls to [`QUnit.test()`](test) internally, and has all the same capabilities, such as support for async functions, returning a Promise, and the `assert` argument.
Each test case is passed one value of your dataset.
The [`only`](test.only), [`skip`](test.skip), and [`todo`](test.todo) variants are also available, as `QUnit.test.only.each`, `QUnit.test.skip.each`, and `QUnit.test.todo.each` respectively.
Examples
--------
### Basic data provider
```
function isEven (x) {
return x % 2 === 0;
}
QUnit.test.each('isEven()', [2, 4, 6], (assert, data) => {
assert.true(isEven(data), `${data} is even`);
});
```
### Array data provider
The original array is passed to your callback. [Array destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) can be used to unpack the data array, directly from the callback signature.
```
function square (x) {
return x * x;
}
QUnit.test.each('square()', [
[2, 4],
[3, 9]
], (assert, [value, expected]) => {
assert.equal(square(value), expected, `${value} squared`);
});
```
### Object data provider
```
QUnit.test.each('isEven()', {
caseEven: [2, true],
caseNotEven: [3, false]
}, (assert, [value, expected]) => {
assert.strictEqual(isEven(value), expected);
});
```
### Async functions with `each()`
```
function isEven (x) {
return x % 2 === 0;
}
async function isAsyncEven (x) {
return isEven(x);
}
QUnit.test.each('isAsyncEven()', [2, 4], async (assert, data) => {
assert.true(await isAsyncEven(data), `${data} is even`);
});
```
Or in classic ES5 syntax, by returning a Promise from each callback:
```
function isEven (x) {
return x % 2 === 0;
}
function isAsyncEven (x) {
return Promise.resolve(isEven(x));
}
QUnit.test.each('isAsyncEven()', [2, 4], function (assert, data) {
return isAsyncEven(data).then(function (result) {
assert.true(result, data + ' is even');
});
});
```
QUnit.module()
==============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.module( name )`
`QUnit.module( name, nested )`
`QUnit.module( name, options )`
`QUnit.module( name, options, nested )`
Group related tests under a common label.
| parameter | description |
| --- | --- |
| `name` (string) | Label for this group of tests. |
| [`options`](#options-object) (object) | Set hook callbacks. |
| [`nested`](#nested-scope) (function) | A scope to create nested modules and/or add hooks functionally. |
All tests inside a module will be grouped under that module. Tests can be added to a module using the [QUnit.test](test) method. Modules help organize, select, and filter tests to run. See [§ Organizing your tests](#organizing-your-tests).
Modules can be nested inside other modules. In the output, tests are generally prefixed by the names of all parent modules. E.g. “Grandparent > Parent > Child > my test”. See [§ Nested module scope](#nested-module-scope).
The `QUnit.module.only()`, `QUnit.module.skip()`, and `QUnit.module.todo()` methods are aliases for `QUnit.module()` that apply the behaviour of [`QUnit.test.only()`](test.only), [`QUnit.test.skip()`](test.skip) or [`QUnit.test.todo()`](test.todo) to all of a module’s tests at once.
### Hooks
You can use hooks to prepare fixtures, or run other setup and teardown logic. Hooks can run around individual tests, or around a whole module.
* `before`: Run a callback before the first test.
* `beforeEach`: Run a callback before each test.
* `afterEach`: Run a callback after each test.
* `after`: Run a callback after the last test.
You can add hooks via the `hooks` parameter of a [scoped module](#nested-scope), or in the module [`options`](#options-object) object, or globally for all tests via [QUnit.hooks](hooks).
Hooks that are added to a module also apply to tests in any nested modules.
Hooks that run *before* a test are ordered from outer-most to inner-most, in the order in which they were added. This means that a test will first run any global beforeEach hooks, then the hooks of parent modules, and finally the hooks added to the immediate module the test is a part of. Hooks that run *after* a test are ordered from inner-most to outer-most, in the reverse order. In other words, `before` and `beforeEach` callbacks form a [queue](https://en.wikipedia.org/wiki/Queue_%28abstract_data_type%29), while `afterEach` and `after` form a [stack](https://en.wikipedia.org/wiki/Stack_%28abstract_data_type%29).
#### Hook callback
A hook callback may be an async function, and may return a Promise or any other then-able. QUnit will automatically wait for your hook’s asynchronous work to finish before continuing to execute the tests.
Each hook has access to the same `assert` object, and test context via `this`, as the [QUnit.test](test) that the hook is running for. Example: [§ Using the test context](#using-the-test-context).
| parameter | description |
| --- | --- |
| `assert` (object) | An [Assert](https://api.qunitjs.com/assert/) object. |
It is discouraged to dynamically create a new [QUnit.test](test) from within a hook. In order to satisfy the requirement for the `after` hook to only run once and to be the last hook in a module, QUnit may associate dynamically defined tests with the parent module instead, or as a global test. It is recommended to define any dynamic tests via [`QUnit.begin()`](../callbacks/qunit.begin).
### Options object
You can use the options object to add [hooks](#hooks).
| name | description |
| --- | --- |
| `before` (function) | Runs before the first test. |
| `beforeEach` (function) | Runs before each test. |
| `afterEach` (function) | Runs after each test. |
| `after` (function) | Runs after the last test. |
Properties on the module options object are copied over to the test context object at the start of each test. Such properties can also be changed from the hook callbacks. See [§ Using the test context](#using-the-test-context).
Example: [§ Declaring module options](#declaring-module-options).
### Nested scope
Modules can be nested to group tests under a common label within a parent module.
The module scope is given a `hooks` object which can be used to procedurally add [hooks](#hooks).
| parameter | description |
| --- | --- |
| `hooks` (object) | An object for adding hooks. |
Example: [§ Nested module scope](#nested-module-scope).
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.4](https://github.com/qunitjs/qunit/releases/tag/2.4.0) | The `QUnit.module.only()`, `QUnit.module.skip()`, and `QUnit.module.todo()` aliases were introduced. |
| [QUnit 2.0](https://github.com/qunitjs/qunit/releases/tag/2.0.0) | The `before` and `after` options were introduced. |
| [QUnit 1.20](https://github.com/qunitjs/qunit/releases/tag/1.20.0) | The `nested` scope feature was introduced. |
| [QUnit 1.16](https://github.com/qunitjs/qunit/releases/tag/1.16.0) | The `beforeEach` and `afterEach` options were introduced. The `setup` and `teardown` options were deprecated in QUnit 1.16 and removed in QUnit 2.0. |
Examples
--------
### Organizing your tests
If `QUnit.module` is defined without a `nested` callback argument, all subsequently defined tests will be grouped into the module until another module is defined.
```
QUnit.module('Group A');
QUnit.test('basic test example 1', function (assert) {
assert.true(true, 'this is fine');
});
QUnit.test('basic test example 2', function (assert) {
assert.true(true, 'this is also fine');
});
QUnit.module('Group B');
QUnit.test('basic test example 3', function (assert) {
assert.true(true, 'this is fine');
});
QUnit.test('basic test example 4', function (assert) {
assert.true(true, 'this is also fine');
});
```
Using modern syntax:
```
const { test } = QUnit;
QUnit.module('Group A');
test('basic test example', assert => {
assert.true(true, 'this is fine');
});
test('basic test example 2', assert => {
assert.true(true, 'this is also fine');
});
QUnit.module('Group B');
test('basic test example 3', assert => {
assert.true(true, 'this is fine');
});
test('basic test example 4', assert => {
assert.true(true, 'this is also fine');
});
```
### Declaring module options
```
QUnit.module('module A', {
before: function () {
// prepare something once for all tests
},
beforeEach: function () {
// prepare something before each test
},
afterEach: function () {
// clean up after each test
},
after: function () {
// clean up once after all tests are done
}
});
```
### Nested module scope
```
const { test } = QUnit;
QUnit.module('Group A', hooks => {
test('basic test example', assert => {
assert.true(true, 'this is fine');
});
test('basic test example 2', assert => {
assert.true(true, 'this is also fine');
});
});
QUnit.module('Group B', hooks => {
test('basic test example 3', assert => {
assert.true(true, 'this is fine');
});
test('basic test example 4', assert => {
assert.true(true, 'this is also fine');
});
});
```
### Hooks on nested modules
`before` and `beforeEach` hooks are queued for nested modules, while `after` and `afterEach` hooks are stacked on nested modules.
```
const { test } = QUnit;
QUnit.module('My Group', hooks => {
// It is valid to call the same hook methods more than once.
hooks.beforeEach(assert => {
assert.ok(true, 'beforeEach called');
});
hooks.afterEach(assert => {
assert.ok(true, 'afterEach called');
});
test('with hooks', assert => {
// 1 x beforeEach
// 1 x afterEach
assert.expect(2);
});
QUnit.module('Nested Group', hooks => {
// This will run after the parent module's beforeEach hook
hooks.beforeEach(assert => {
assert.ok(true, 'nested beforeEach called');
});
// This will run before the parent module's afterEach
hooks.afterEach(assert => {
assert.ok(true, 'nested afterEach called');
});
test('with nested hooks', assert => {
// 2 x beforeEach (parent, current)
// 2 x afterEach (current, parent)
assert.expect(4);
});
});
});
```
### Using the test context
The test context object is exposed to hook callbacks.
```
QUnit.module('Machine Maker', {
beforeEach: function () {
this.maker = new Maker();
this.parts = ['wheels', 'motor', 'chassis'];
}
});
QUnit.test('makes a robot', function (assert) {
this.parts.push('arduino');
assert.equal(this.maker.build(this.parts), 'robot');
assert.deepEqual(this.maker.log, ['robot']);
});
QUnit.test('makes a car', function (assert) {
assert.equal(this.maker.build(this.parts), 'car');
this.maker.duplicate();
assert.deepEqual(this.maker.log, ['car', 'car']);
});
```
The test context is also available when using the nested scope. Beware that the `this` binding is not available in arrow functions.
```
const { test } = QUnit;
QUnit.module('Machine Maker', hooks => {
hooks.beforeEach(function () {
this.maker = new Maker();
this.parts = ['wheels', 'motor', 'chassis'];
});
test('makes a robot', function (assert) {
this.parts.push('arduino');
assert.equal(this.maker.build(this.parts), 'robot');
assert.deepEqual(this.maker.log, ['robot']);
});
test('makes a car', function (assert) {
assert.equal(this.maker.build(this.parts), 'car');
this.maker.duplicate();
assert.deepEqual(this.maker.log, ['car', 'car']);
});
});
```
It might be more convenient to use JavaScript’s own lexical scope instead:
```
const { test } = QUnit;
QUnit.module('Machine Maker', hooks => {
let maker;
let parts;
hooks.beforeEach(() => {
maker = new Maker();
parts = ['wheels', 'motor', 'chassis'];
});
test('makes a robot', assert => {
parts.push('arduino');
assert.equal(maker.build(parts), 'robot');
assert.deepEqual(maker.log, ['robot']);
});
test('makes a car', assert => {
assert.equal(maker.build(parts), 'car');
maker.duplicate();
assert.deepEqual(maker.log, ['car', 'car']);
});
});
```
### Module hook with Promise
An example of handling an asynchronous `then`able Promise result in hooks. This example uses an [ES6 Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) interface that is fulfilled after connecting to or disconnecting from a database.
```
QUnit.module('Database connection', {
before: function () {
return new Promise(function (resolve, reject) {
DB.connect(function (err) {
if (err) {
reject(err);
} else {
resolve();
}
});
});
},
after: function () {
return new Promise(function (resolve, reject) {
DB.disconnect(function (err) {
if (err) {
reject(err);
} else {
resolve();
}
});
});
}
});
```
### Only run a subset of tests
Use `QUnit.module.only()` to treat an entire module’s tests as if they used [`QUnit.test.only`](test.only) instead of [`QUnit.test`](test).
```
QUnit.module('Robot', hooks => {
// ...
});
// Only execute this module when developing the feature,
// skipping tests from other modules.
QUnit.module.only('Android', hooks => {
let android;
hooks.beforeEach(() => {
android = new Android();
});
QUnit.test('Say hello', assert => {
assert.strictEqual(android.hello(), 'Hello, my name is AN-2178!');
});
QUnit.test('Basic conversation', assert => {
android.loadConversationData({
Hi: 'Hello',
"What's your name?": 'My name is AN-2178.',
'Nice to meet you!': 'Nice to meet you too!',
'...': '...'
});
assert.strictEqual(
android.answer("What's your name?"),
'My name is AN-2178.'
);
});
// ...
});
```
Use `QUnit.module.skip()` to treat an entire module’s tests as if they used [`QUnit.test.skip`](test.skip) instead of [`QUnit.test`](test).
```
QUnit.module('Robot', hooks => {
// ...
});
// Skip this module's tests.
// For example if the android tests are failing due to unsolved problems.
QUnit.module.skip('Android', hooks => {
let android;
hooks.beforeEach(() => {
android = new Android();
});
QUnit.test('Say hello', assert => {
assert.strictEqual(android.hello(), 'Hello, my name is AN-2178!');
});
QUnit.test('Basic conversation', assert => {
// ...
assert.strictEqual(
android.answer('Nice to meet you!'),
'Nice to meet you too!'
);
});
// ...
});
```
Use `QUnit.module.todo()` to denote a feature that is still under development, and is known to not yet be passing all its tests. This treats an entire module’s tests as if they used [`QUnit.test.todo`](test.todo) instead of [`QUnit.test`](test).
```
QUnit.module.todo('Robot', hooks => {
let robot;
hooks.beforeEach(() => {
robot = new Robot();
});
QUnit.test('Say', assert => {
// Currently, it returns undefined
assert.strictEqual(robot.say(), "I'm Robot FN-2187");
});
QUnit.test('Move arm', assert => {
// Move the arm to point (75, 80). Currently, each throws a NotImplementedError
robot.moveArmTo(75, 80);
assert.deepEqual(robot.getPosition(), { x: 75, y: 80 });
});
// ...
});
```
QUnit.hooks
===========
version added: [2.18.0](https://github.com/qunitjs/qunit/releases/tag/2.18.0)
Description
-----------
`QUnit.hooks.beforeEach( callback )`
`QUnit.hooks.afterEach( callback )`
Register a global callback to run before or after each test.
| parameter | description |
| --- | --- |
| callback (function) | Callback to execute. Called with an [assert](https://api.qunitjs.com/assert/) argument. |
This is the equivalent of applying a `QUnit.module()` hook to all modules and all tests, including global tests that are not associated with any module.
Similar to module hooks, global hooks support async functions or returning a Promise, which will be waited for before QUnit continues executing tests. Each global hook also has access to the same `assert` object and test context as the [QUnit.test](test) that the hook is running for.
For more details about hooks, refer to [QUnit.module § Hooks](module#hooks).
Examples
--------
```
QUnit.hooks.beforeEach(function () {
this.app = new MyApp();
});
QUnit.hooks.afterEach(async function (assert) {
assert.deepEqual([], await this.app.getErrors(), 'MyApp errors');
MyApp.reset();
});
```
QUnit.start()
=============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.start()`
Start the test runner manually when [`QUnit.config.autostart`](../config/autostart) is `false`, for example when you load test files with AMD, RequireJS, or ESM dynamic imports.
Note: See [`QUnit.config.autostart`](../config/autostart) for detailed examples of how to use this.
**Warning**: Prior to QUnit 1.16, this method was also used to resume an asynchronous test, as the complement of `QUnit.stop()`. To resume asynchronous tests, use [`assert.async()`](../assert/async) instead.
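Examples
--------
A minimal sketch (not part of the upstream documentation) of starting the runner manually after loading test files with dynamic imports; the test file paths here are hypothetical:
```
// Disable autostart before any test files are loaded.
QUnit.config.autostart = false;

// Load the test modules asynchronously, then start the runner once.
Promise.all([
  import('./tests/foo.test.js'),
  import('./tests/bar.test.js')
]).then(() => {
  QUnit.start();
});
```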
QUnit.test()
============
version added: [1.0.0](https://github.com/qunitjs/qunit/releases/tag/1.0.0)
Description
-----------
`QUnit.test( name, callback )`
Define a test using `QUnit.test()`.
| parameter | description |
| --- | --- |
| `name` (string) | Title of unit being tested |
| `callback` (function) | Function that performs the test |
### Callback parameters
| parameter | description |
| --- | --- |
| `assert` (object) | An [Assert](https://api.qunitjs.com/assert/) object |
The `assert` argument to the callback contains all of QUnit’s [assertion methods](https://api.qunitjs.com/assert/). Use this to make your test assertions.
`QUnit.test()` can automatically handle the asynchronous resolution of a Promise on your behalf if you return a “then-able” Promise as the result of your callback function.
See also:
* [`QUnit.test.only()`](test.only)
* [`QUnit.test.skip()`](test.skip)
* [`QUnit.test.todo()`](test.todo)
Changelog
---------
| | |
| --- | --- |
| [QUnit 1.16](https://github.com/qunitjs/qunit/releases/tag/1.16.0) | Added support for async functions, and returning of a Promise. |
Examples
--------
### Example: Standard test
A practical example, using the assert argument.
```
function square (x) {
return x * x;
}
QUnit.test('square()', assert => {
assert.equal(square(2), 4, 'square(2)');
assert.equal(square(3), 9, 'square(3)');
});
```
### Example: Async test
Following the example above, `QUnit.test` also supports JS [async functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) syntax out of the box.
```
QUnit.test('Test with async-await', async assert => {
const a = await fetchSquare(2);
const b = await fetchSquare(3);
assert.equal(a, 4);
assert.equal(b, 9);
assert.equal(await fetchSquare(5), 25);
});
```
### Example: Test with Promise
In ES5 and older environments, you can also return a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) from your standard test function. This also supports other then-able values, such as `jQuery.Deferred` and Bluebird Promise.
This example returns a Promise that is resolved after waiting for 1 second.
```
function fetchSquare (x) {
return new Promise(function (resolve) {
setTimeout(function () {
resolve(x * x);
}, 1000);
});
}
QUnit.test('Test with Promise', function (assert) {
return fetchSquare(3).then(function (result) {
assert.equal(result, 9);
});
});
```
QUnit.test.only()
=================
version added: [1.20.0](https://github.com/qunitjs/qunit/releases/tag/1.20.0)
Description
-----------
`QUnit.test.only( name, callback )`
`QUnit.only( name, callback )`
Add a test that is exclusively run, preventing other tests from running unless they are defined this way.
| parameter | description |
| --- | --- |
| `name` (string) | Title of unit being tested |
| `callback` (function) | Function that performs the test |
### Callback parameters
| parameter | description |
| --- | --- |
| `assert` (object) | A new instance object with the [assertion methods](https://api.qunitjs.com/assert/) |
Use this method to focus your test suite on specific tests. `QUnit.test.only` will cause any other tests in your suite to be ignored.
This method is an alternative to re-running individual tests from the HTML reporter interface, and can be especially useful as it can be done upfront without first running the test suite, e.g. in a codebase with many long-running tests.
It can also be used instead of the `--filter` CLI option, e.g. if you already have the test open in your text editor, similar to how one might use the `debugger` keyword.
When debugging a larger area of code, you may want to *only* run all tests within a given module. You can also use [`QUnit.module.only()`](module) to automatically mark all tests in a module as “only” tests.
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.12](https://github.com/qunitjs/qunit/releases/tag/2.12.0) | The `QUnit.only()` method was renamed to `QUnit.test.only()`. Use of `QUnit.only()` remains supported as an alias. |
| [QUnit 1.20](https://github.com/qunitjs/qunit/releases/tag/1.20.0) | The `QUnit.only()` method was introduced. |
Examples
--------
How to use `QUnit.test.only` to filter which tests are run.
```
QUnit.module('robot', hooks => {
let robot;
hooks.beforeEach(() => {
robot = new Robot();
});
QUnit.test('say()', assert => {
assert.true(robot.say('Hello'));
});
// Run only this test
// For example, you are working on changing this method.
QUnit.test.only('laser()', assert => {
assert.true(robot.laser());
});
QUnit.test('take()', assert => {
assert.true(robot.take(5));
});
});
```
QUnit.test.skip()
=================
version added: [1.16.0](https://github.com/qunitjs/qunit/releases/tag/1.16.0)
Description
-----------
`QUnit.test.skip( name, callback )`
`QUnit.skip( name, callback )`
Add a test that will be skipped during the run.
| parameter | description |
| --- | --- |
| `name` (string) | Title of unit being tested |
| `callback` (function) | Function that performs the test |
Use this method to disable a [`QUnit.test()`](test), as alternative to commenting out the test.
This test will be listed in the results as a “skipped” test. The callback and the respective module’s hooks will not run.
As a codebase becomes bigger, you may sometimes want to temporarily disable an entire group of tests at once. You can use [`QUnit.module.skip()`](module) to recursively skip all tests in the same module.
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.12](https://github.com/qunitjs/qunit/releases/tag/2.12.0) | The `QUnit.skip()` method was renamed to `QUnit.test.skip()`. Use of `QUnit.skip()` remains supported as an alias. |
| [QUnit 1.16](https://github.com/qunitjs/qunit/releases/tag/1.16.0) | The `QUnit.skip()` method was introduced. |
Examples
--------
How to use `skip` as a placeholder for future or temporarily broken tests.
```
QUnit.module('robot', hooks => {
let robot;
hooks.beforeEach(() => {
robot = new Robot();
});
QUnit.test('say', assert => {
assert.strictEqual(robot.say(), 'Exterminate!');
});
// Robot does not have a laser() method yet, skip this test for now
QUnit.test.skip('laser', assert => {
assert.true(robot.laser());
});
});
```
QUnit.test.todo()
=================
version added: [2.2.0](https://github.com/qunitjs/qunit/releases/tag/2.2.0)
Description
-----------
`QUnit.test.todo( name, callback )`
`QUnit.todo( name, callback )`
Add a test which expects at least one failing assertion or exception during its run.
| parameter | description |
| --- | --- |
| `name` (string) | Title of unit being tested |
| `callback` (function) | Function that performs the test |
### Callback parameters
| parameter | description |
| --- | --- |
| `assert` (object) | A new instance object with the [assertion methods](https://api.qunitjs.com/assert/) |
Use this method to test a unit of code that is still under development (in a “todo” state). The “todo” test will pass as long as there is at least one assertion still failing, or if an exception is thrown.
When all assertions are passing, the “todo” test will fail, thus signaling that `QUnit.test.todo()` should be changed to [`QUnit.test()`](test).
You can also use [`QUnit.module.todo()`](module) to manage the “todo” state for all tests within a module at once.
Changelog
---------
| | |
| --- | --- |
| [QUnit 2.12](https://github.com/qunitjs/qunit/releases/tag/2.12.0) | The `QUnit.todo()` method was renamed to `QUnit.test.todo()`. Use of `QUnit.todo()` remains supported as an alias. |
| [QUnit 2.2](https://github.com/qunitjs/qunit/releases/tag/2.2.0) | The `QUnit.todo()` method was introduced. |
Examples
--------
How to use `QUnit.test.todo` to denote code that is still under development.
```
QUnit.module('Robot', hooks => {
let robot;
hooks.beforeEach(() => {
robot = new Robot();
});
// Robot is not yet finished, so this is a todo test
QUnit.test.todo('fireLazer', assert => {
const result = robot.fireLazer();
assert.equal(result, "I'm firing my lazer!");
});
});
```
Package Index
=============
---
### Packages in the standard library
| | |
| --- | --- |
| [base](/r-base/) | The R Base Package |
| [boot](/r-boot/) | Bootstrap Functions (Originally by Angelo Canty for S) |
| [class](/r-class/) | Functions for Classification |
| [cluster](/r-cluster/) | "Finding Groups in Data": Cluster Analysis Extended Rousseeuw et al. |
| [codetools](/r-codetools/) | Code Analysis Tools for R |
| [compiler](/r-compiler/) | The R Compiler Package |
| [datasets](/r-datasets/) | The R Datasets Package |
| [foreign](/r-foreign/) | Read Data Stored by 'Minitab', 'S', 'SAS', 'SPSS', 'Stata', 'Systat', 'Weka', 'dBase', ... |
| [graphics](/r-graphics/) | The R Graphics Package |
| [grDevices](/r-grdevices/) | The R Graphics Devices and Support for Colours and Fonts |
| [grid](/r-grid/) | The Grid Graphics Package |
| [KernSmooth](/r-kernsmooth/) | Functions for Kernel Smoothing Supporting Wand & Jones (1995) |
| [lattice](/r-lattice/) | Trellis Graphics for R |
| [MASS](/r-mass/) | Support Functions and Datasets for Venables and Ripley's MASS |
| [Matrix](/r-matrix/) | Sparse and Dense Matrix Classes and Methods |
| [methods](/r-methods/) | Formal Methods and Classes |
| [mgcv](/r-mgcv/) | Mixed GAM Computation Vehicle with Automatic Smoothness Estimation |
| [nlme](/r-nlme/) | Linear and Nonlinear Mixed Effects Models |
| [nnet](/r-nnet/) | Feed-Forward Neural Networks and Multinomial Log-Linear Models |
| [parallel](/r-parallel/) | Support for Parallel computation in R |
| [rpart](/r-rpart/) | Recursive Partitioning and Regression Trees |
| [spatial](/r-spatial/) | Functions for Kriging and Point Pattern Analysis |
| [splines](/r-splines/) | Regression Spline Functions and Classes |
| [stats](/r-stats/) | The R Stats Package |
| [stats4](/r-stats4/) | Statistical Functions using S4 Classes |
| [survival](/r-survival/) | Survival Analysis |
| [tcltk](/r-tcltk/) | Tcl/Tk Interface |
| [tools](/r-tools/) | Tools for Package Development |
| [utils](/r-utils/) | The R Utils Package |
`Rabbit` Blood Pressure in Rabbits
-----------------------------------
### Description
Five rabbits were studied on two occasions, after treatment with saline (control) and after treatment with the *5-HT\_3* antagonist MDL 72222. After each treatment ascending doses of phenylbiguanide were injected intravenously at 10 minute intervals and the responses of mean blood pressure measured. The goal was to test whether the cardiogenic chemoreflex elicited by phenylbiguanide depends on the activation of *5-HT\_3* receptors.
### Usage
```
Rabbit
```
### Format
This data frame contains 60 rows and the following variables:
`BPchange`
change in blood pressure relative to the start of the experiment.
`Dose`
dose of Phenylbiguanide in micrograms.
`Run`
label of run (`"C1"` to `"C5"`, then `"M1"` to `"M5"`).
`Treatment`
placebo or the *5-HT\_3* antagonist MDL 72222.
`Animal`
label of animal used (`"R1"` to `"R5"`).
### Source
J. Ludbrook (1994) Repeated measurements and multiple comparisons in cardiovascular research. *Cardiovascular Research* **28**, 303–311.
[The numerical data are not in the paper but were supplied by Professor Ludbrook]
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
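### Examples

The original help page has no Examples section; the following sketch is illustrative only (not from MASS) and assumes the MASS and lattice packages are available:

```
library(MASS)
library(lattice)
## Illustrative only: dose-response of the blood-pressure change for
## each animal, comparing the control and MDL 72222 treatments
xyplot(BPchange ~ log(Dose) | Animal, data = Rabbit,
       groups = Treatment, type = "b", auto.key = TRUE)
```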
`summary.negbin` Summary Method Function for Objects of Class 'negbin'
-----------------------------------------------------------------------
### Description
Identical to `summary.glm`, but with three lines of additional output: the ML estimate of theta, its standard error, and twice the log-likelihood function.
### Usage
```
## S3 method for class 'negbin'
summary(object, dispersion = 1, correlation = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | fitted model object of class `negbin` inheriting from `glm` and `lm`. Typically the output of `glm.nb`. |
| `dispersion` | as for `summary.glm`, with a default of 1. |
| `correlation` | as for `summary.glm`. |
| `...` | arguments passed to or from other methods. |
### Details
`summary.glm` is used to produce the majority of the output and supply the result. This function is a method for the generic function `summary()` for class `"negbin"`. It can be invoked by calling `summary(x)` for an object `x` of the appropriate class, or directly by calling `summary.negbin(x)` regardless of the class of the object.
### Value
As for `summary.glm`; the additional lines of output are not included in the resultant object.
### Side Effects
A summary table is produced as for `summary.glm`, with the additional information described above.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[summary](../../base/html/summary)`, `<glm.nb>`, `<negative.binomial>`, `<anova.negbin>`
### Examples
```
summary(glm.nb(Days ~ Eth*Age*Lrn*Sex, quine, link = log))
```
`glm.nb` Fit a Negative Binomial Generalized Linear Model
----------------------------------------------------------
### Description
A modification of the system function `[glm](../../stats/html/glm)()` to include estimation of the additional parameter, `theta`, for a Negative Binomial generalized linear model.
### Usage
```
glm.nb(formula, data, weights, subset, na.action,
start = NULL, etastart, mustart,
control = glm.control(...), method = "glm.fit",
model = TRUE, x = FALSE, y = TRUE, contrasts = NULL, ...,
init.theta, link = log)
```
### Arguments
| | |
| --- | --- |
| `formula, data, weights, subset, na.action, start, etastart, mustart, control, method, model, x, y, contrasts, ...` | arguments for the `[glm](../../stats/html/glm)()` function. Note that these exclude `family` and `offset` (but `[offset](../../stats/html/offset)()` can be used). |
| `init.theta` | Optional initial value for the theta parameter. If omitted a moment estimator after an initial fit using a Poisson GLM is used. |
| `link` | The link function. Currently must be one of `log`, `sqrt` or `identity`. |
### Details
An alternating iteration process is used. For given `theta` the GLM is fitted using the same process as used by `glm()`. For fixed means the `theta` parameter is estimated using score and information iterations. The two are alternated until convergence of both. (The number of alternations and the number of iterations when estimating `theta` are controlled by the `maxit` parameter of `glm.control`.)
Setting `trace > 0` traces the alternating iteration process. Setting `trace > 1` traces the `glm` fit, and setting `trace > 2` traces the estimation of `theta`.
### Value
A fitted model object of class `negbin` inheriting from `glm` and `lm`. The object is like the output of `glm` but contains three additional components, namely `theta` for the ML estimate of theta, `SE.theta` for its approximate standard error (using observed rather than expected information), and `twologlik` for twice the log-likelihood function.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[glm](../../stats/html/glm)`, `<negative.binomial>`, `<anova.negbin>`, `<summary.negbin>`, `<theta.md>`
There is a `[simulate](../../stats/html/simulate)` method.
### Examples
```
quine.nb1 <- glm.nb(Days ~ Sex/(Age + Eth*Lrn), data = quine)
quine.nb2 <- update(quine.nb1, . ~ . + Sex:Age:Lrn)
quine.nb3 <- update(quine.nb2, Days ~ .^4)
anova(quine.nb1, quine.nb2, quine.nb3)
```
`geyser` Old Faithful Geyser Data
----------------------------------
### Description
A version of the eruptions data from the ‘Old Faithful’ geyser in Yellowstone National Park, Wyoming. This version comes from Azzalini and Bowman (1990) and is of continuous measurement from August 1 to August 15, 1985.
Some nocturnal duration measurements were coded as 2, 3 or 4 minutes, having originally been described as ‘short’, ‘medium’ or ‘long’.
### Usage
```
geyser
```
### Format
A data frame with 299 observations on 2 variables.
| | | |
| --- | --- | --- |
| `duration` | numeric | Eruption time in mins |
| `waiting` | numeric | Waiting time for this eruption |
### Note
The `waiting` time was incorrectly described as the time to the next eruption in the original files, and corrected for MASS version 7.3-30.
### References
Azzalini, A. and Bowman, A. W. (1990) A look at some data on the Old Faithful geyser. *Applied Statistics* **39**, 357–365.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[faithful](../../datasets/html/faithful)`.
CRAN package sm.
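### Examples

The original help page has no Examples section; this sketch is illustrative only and assumes the MASS package is attached:

```
library(MASS)
## Illustrative only: joint behaviour of waiting time and eruption duration
plot(duration ~ waiting, data = geyser,
     xlab = "waiting time (min)", ylab = "eruption duration (min)")
```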
`abbey` Determinations of Nickel Content
-----------------------------------------
### Description
A numeric vector of 31 determinations of nickel content (ppm) in a Canadian syenite rock.
### Usage
```
abbey
```
### Source
S. Abbey (1988) *Geostandards Newsletter* **12**, 241.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
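### Examples

An illustrative sketch (not part of the original help page), assuming the MASS package is attached:

```
library(MASS)
## Illustrative only: the determinations contain outliers, so compare
## the sample mean with more robust estimates of location
mean(abbey)
median(abbey)
huber(abbey)   # Huber M-estimator with MAD scale
```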
`rnegbin` Simulate Negative Binomial Variates
----------------------------------------------
### Description
Function to generate random outcomes from a Negative Binomial distribution, with mean `mu` and variance `mu + mu^2/theta`.
### Usage
```
rnegbin(n, mu = n, theta = stop("'theta' must be specified"))
```
### Arguments
| | |
| --- | --- |
| `n` | If a scalar, the number of sample values required. If a vector, `length(n)` is the number required and `n` is used as the mean vector if `mu` is not specified. |
| `mu` | The vector of means. Short vectors are recycled. |
| `theta` | Vector of values of the `theta` parameter. Short vectors are recycled. |
### Details
The function uses the representation of the Negative Binomial distribution as a continuous mixture of Poisson distributions with Gamma distributed means. Unlike `rnbinom` the index can be arbitrary.
### Value
Vector of random Negative Binomial variate values.
### Side Effects
Changes `.Random.seed` in the usual way.
### Examples
```
# Negative Binomials with means fitted(fm) and theta = 4.5
fm <- glm.nb(Days ~ ., data = quine)
dummy <- rnegbin(fitted(fm), theta = 4.5)
```
`motors` Accelerated Life Testing of Motorettes
------------------------------------------------
### Description
The `motors` data frame has 40 rows and 3 columns. It describes an accelerated life test at each of four temperatures of 10 motorettes, and has rather discrete times.
### Usage
```
motors
```
### Format
This data frame contains the following columns:
`temp`
the temperature (degrees C) of the test.
`time`
the time in hours to failure or censoring at 8064 hours (= 336 days).
`cens`
an indicator variable for death.
### Source
Kalbfleisch, J. D. and Prentice, R. L. (1980) *The Statistical Analysis of Failure Time Data.* New York: Wiley.
taken from
Nelson, W. D. and Hahn, G. J. (1972) Linear estimation of a regression relationship from censored data. Part 1 – simple methods and their application. *Technometrics*, **14**, 247–276.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
library(survival)
plot(survfit(Surv(time, cens) ~ factor(temp), motors), conf.int = FALSE)
# fit Weibull model
motor.wei <- survreg(Surv(time, cens) ~ temp, motors)
summary(motor.wei)
# and predict at 130C
unlist(predict(motor.wei, data.frame(temp=130), se.fit = TRUE))
motor.cox <- coxph(Surv(time, cens) ~ temp, motors)
summary(motor.cox)
# predict at temperature 200
plot(survfit(motor.cox, newdata = data.frame(temp=200),
conf.type = "log-log"))
summary( survfit(motor.cox, newdata = data.frame(temp=130)) )
```
`DDT` DDT in Kale
------------------
### Description
A numeric vector of 15 measurements by different laboratories of the pesticide DDT in kale, in ppm (parts per million) using the multiple pesticide residue measurement.
### Usage
```
DDT
```
### Source
C. E. Finsterwalder (1976) Collaborative study of an extension of the Mills *et al* method for the determination of pesticide residues in food. *J. Off. Anal. Chem.* **59**, 169–171
R. G. Staudte and S. J. Sheather (1990) *Robust Estimation and Testing.* Wiley
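### Examples

An illustrative sketch (not part of the original help page), assuming the MASS package is attached:

```
library(MASS)
## Illustrative only: summarise the 15 laboratory determinations
summary(DDT)
truehist(DDT, nbins = 8, xlab = "DDT in kale (ppm)")
```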
`snails` Snail Mortality Data
------------------------------
### Description
Groups of 20 snails were held for periods of 1, 2, 3 or 4 weeks in carefully controlled conditions of temperature and relative humidity. There were two species of snail, A and B, and the experiment was designed as a 4 by 3 by 4 by 2 completely randomized design. At the end of the exposure time the snails were tested to see if they had survived; the process itself is fatal for the animals. The object of the exercise was to model the probability of survival in terms of the stimulus variables, and in particular to test for differences between species.
The data are unusual in that in most cases fatalities during the experiment were fairly small.
### Usage
```
snails
```
### Format
The data frame contains the following components:
`Species`
snail species A (`1`) or B (`2`).
`Exposure`
exposure in weeks.
`Rel.Hum`
relative humidity (4 levels).
`Temp`
temperature, in degrees Celsius (3 levels).
`Deaths`
number of deaths.
`N`
number of snails exposed.
### Source
Zoology Department, The University of Adelaide.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
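### Examples

The original help page has no Examples section; this sketch is illustrative only and assumes the MASS package is attached:

```
library(MASS)
## Illustrative only: model the probability of death with a binomial GLM,
## including a term for the difference between the two species
snails.glm <- glm(cbind(Deaths, N - Deaths) ~ Species + Exposure + Rel.Hum + Temp,
                  family = binomial, data = snails)
summary(snails.glm)
```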
`hubers` Huber Proposal 2 Robust Estimator of Location and/or Scale
--------------------------------------------------------------------
### Description
Finds the Huber M-estimator for location with scale specified, scale with location specified, or both if neither is specified.
### Usage
```
hubers(y, k = 1.5, mu, s, initmu = median(y), tol = 1e-06)
```
### Arguments
| | |
| --- | --- |
| `y` | vector y of data values |
| `k` | Winsorizes at `k` standard deviations |
| `mu` | specified location |
| `s` | specified scale |
| `initmu` | initial value of `mu` |
| `tol` | convergence tolerance |
### Value
list of location and scale estimates
| | |
| --- | --- |
| `mu` | location estimate |
| `s` | scale estimate |
### References
Huber, P. J. (1981) *Robust Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<huber>`
### Examples
```
hubers(chem)
hubers(chem, mu=3.68)
```
`Boston` Housing Values in Suburbs of Boston
---------------------------------------------
### Description
The `Boston` data frame has 506 rows and 14 columns.
### Usage
```
Boston
```
### Format
This data frame contains the following columns:
`crim`
per capita crime rate by town.
`zn`
proportion of residential land zoned for lots over 25,000 sq.ft.
`indus`
proportion of non-retail business acres per town.
`chas`
Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
`nox`
nitrogen oxides concentration (parts per 10 million).
`rm`
average number of rooms per dwelling.
`age`
proportion of owner-occupied units built prior to 1940.
`dis`
weighted mean of distances to five Boston employment centres.
`rad`
index of accessibility to radial highways.
`tax`
full-value property-tax rate per \$10,000.
`ptratio`
pupil-teacher ratio by town.
`black`
*1000(Bk - 0.63)^2* where *Bk* is the proportion of blacks by town.
`lstat`
lower status of the population (percent).
`medv`
median value of owner-occupied homes in \$1000s.
### Source
Harrison, D. and Rubinfeld, D.L. (1978) Hedonic prices and the demand for clean air. *J. Environ. Economics and Management* **5**, 81–102.
Belsley D.A., Kuh, E. and Welsch, R.E. (1980) *Regression Diagnostics. Identifying Influential Data and Sources of Collinearity.* New York: Wiley.
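### Examples

An illustrative sketch (not part of the original help page), assuming the MASS package is attached:

```
library(MASS)
## Illustrative only: regress median home value on lower-status
## percentage and average number of rooms
summary(lm(medv ~ lstat + rm, data = Boston))
```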
`beav1` Body Temperature Series of Beaver 1
--------------------------------------------
### Description
Reynolds (1994) describes a small part of a study of the long-term temperature dynamics of beaver *Castor canadensis* in north-central Wisconsin. Body temperature was measured by telemetry every 10 minutes for four females, but only data from one period of less than a day for each of two animals are used here.
### Usage
```
beav1
```
### Format
The `beav1` data frame has 114 rows and 4 columns. This data frame contains the following columns:
`day`
Day of observation (in days since the beginning of 1990), December 12–13.
`time`
Time of observation, in the form `0330` for 3.30am.
`temp`
Measured body temperature in degrees Celsius.
`activ`
Indicator of activity outside the retreat.
### Note
The observation at 22:20 is missing.
### Source
P. S. Reynolds (1994) Time-series analyses of beaver body temperatures. Chapter 11 of Lange, N., Ryan, L., Billard, L., Brillinger, D., Conquest, L. and Greenhouse, J. eds (1994) *Case Studies in Biometry.* New York: John Wiley and Sons.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<beav2>`
### Examples
```
beav1 <- within(beav1,
hours <- 24*(day-346) + trunc(time/100) + (time%%100)/60)
plot(beav1$hours, beav1$temp, type="l", xlab="time",
ylab="temperature", main="Beaver 1")
usr <- par("usr"); usr[3:4] <- c(-0.2, 8); par(usr=usr)
lines(beav1$hours, beav1$activ, type="s", lty=2)
temp <- ts(c(beav1$temp[1:82], NA, beav1$temp[83:114]),
start = 9.5, frequency = 6)
activ <- ts(c(beav1$activ[1:82], NA, beav1$activ[83:114]),
start = 9.5, frequency = 6)
acf(temp[1:53])
acf(temp[1:53], type = "partial")
ar(temp[1:53])
act <- c(rep(0, 10), activ)
X <- cbind(1, act = act[11:125], act1 = act[10:124],
act2 = act[9:123], act3 = act[8:122])
alpha <- 0.80
stemp <- as.vector(temp - alpha*lag(temp, -1))
sX <- X[-1, ] - alpha * X[-115,]
beav1.ls <- lm(stemp ~ -1 + sX, na.action = na.omit)
summary(beav1.ls, cor = FALSE)
rm(temp, activ)
```
`rotifer` Numbers of Rotifers by Fluid Density
-----------------------------------------------
### Description
The data give the numbers of rotifers falling out of suspension for different fluid densities. There are two species, `pm` *Polyartha major* and `kc`, *Keratella cochlearis* and for each species the number falling out and the total number are given.
### Usage
```
rotifer
```
### Format
`density`
specific density of fluid.
`pm.y`
number falling out for *P. major*.
`pm.total`
total number of *P. major*.
`kc.y`
number falling out for *K. cochlearis*.
`kc.tot`
total number of *K. cochlearis*.
### Source
D. Collett (1991) *Modelling Binary Data.* Chapman & Hall. p. 217
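### Examples

The original help page has no Examples section; this sketch is illustrative only and assumes the MASS package is attached:

```
library(MASS)
## Illustrative only: logistic regression of the proportion of
## P. major falling out of suspension on fluid density
rot.pm <- glm(cbind(pm.y, pm.total - pm.y) ~ density,
              family = binomial, data = rotifer)
summary(rot.pm)
```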
`anova.negbin` Likelihood Ratio Tests for Negative Binomial GLMs
-----------------------------------------------------------------
### Description
Method function to perform sequential likelihood ratio tests for Negative Binomial generalized linear models.
### Usage
```
## S3 method for class 'negbin'
anova(object, ..., test = "Chisq")
```
### Arguments
| | |
| --- | --- |
| `object` | Fitted model object of class `"negbin"`, inheriting from classes `"glm"` and `"lm"`, specifying a Negative Binomial fitted GLM. Typically the output of `<glm.nb>()`. |
| `...` | Zero or more additional fitted model objects of class `"negbin"`. They should form a nested sequence of models, but need not be specified in any particular order. |
| `test` | Argument to match the `test` argument of `[anova.glm](../../stats/html/anova.glm)`. Ignored (with a warning if changed) if a sequence of two or more Negative Binomial fitted model objects is specified, but possibly used if only one object is specified. |
### Details
This function is a method for the generic function `anova()` for class `"negbin"`. It can be invoked by calling `anova(x)` for an object `x` of the appropriate class, or directly by calling `anova.negbin(x)` regardless of the class of the object.
### Note
If only one fitted model object is specified, a sequential analysis of deviance table is given for the fitted model. The `theta` parameter is kept fixed. If more than one fitted model object is specified they must all be of class `"negbin"` and likelihood ratio tests are done of each model within the next. In this case `theta` is assumed to have been re-estimated for each model.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<glm.nb>`, `<negative.binomial>`, `<summary.negbin>`
### Examples
```
m1 <- glm.nb(Days ~ Eth*Age*Lrn*Sex, quine, link = log)
m2 <- update(m1, . ~ . - Eth:Age:Lrn:Sex)
anova(m2, m1)
anova(m2)
```
`lm.ridge` Ridge Regression
----------------------------
### Description
Fit a linear model by ridge regression.
### Usage
```
lm.ridge(formula, data, subset, na.action, lambda = 0, model = FALSE,
x = FALSE, y = FALSE, contrasts = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula expression as for regression models, of the form `response ~ predictors`. See the documentation of `formula` for other details. `[offset](../../stats/html/offset)` terms are allowed. |
| `data` | an optional data frame, list or environment in which to interpret the variables occurring in `formula`. |
| `subset` | expression saying which subset of the rows of the data should be used in the fit. All observations are included by default. |
| `na.action` | a function to filter missing data. |
| `lambda` | A scalar or vector of ridge constants. |
| `model` | should the model frame be returned? Not implemented. |
| `x` | should the design matrix be returned? Not implemented. |
| `y` | should the response be returned? Not implemented. |
| `contrasts` | a list of contrasts to be used for some or all of factor terms in the formula. See the `contrasts.arg` of `[model.matrix.default](../../stats/html/model.matrix)`. |
| `...` | additional arguments to `[lm.fit](../../stats/html/lmfit)`. |
### Details
If an intercept is present in the model, its coefficient is not penalized. (If you want to penalize an intercept, put in your own constant term and remove the intercept.)
### Value
A list with components
| | |
| --- | --- |
| `coef` | matrix of coefficients, one row for each value of `lambda`. Note that these are not on the original scale and are for use by the `[coef](../../stats/html/coef)` method. |
| `scales` | scalings used on the X matrix. |
| `Inter` | was intercept included? |
| `lambda` | vector of lambda values |
| `ym` | mean of `y` |
| `xm` | column means of `x` matrix |
| `GCV` | vector of GCV values |
| `kHKB` | HKB estimate of the ridge constant. |
| `kLW` | L-W estimate of the ridge constant. |
### References
Brown, P. J. (1994) *Measurement, Regression and Calibration* Oxford.
### See Also
`[lm](../../stats/html/lm)`
### Examples
```
longley # not the same as the S-PLUS dataset
names(longley)[1] <- "y"
lm.ridge(y ~ ., longley)
plot(lm.ridge(y ~ ., longley,
lambda = seq(0,0.1,0.001)))
select(lm.ridge(y ~ ., longley,
lambda = seq(0,0.1,0.0001)))
```
`accdeaths` Accidental Deaths in the US 1973-1978
--------------------------------------------------
### Description
A regular time series giving the monthly totals of accidental deaths in the USA.
### Usage
```
accdeaths
```
### Details
The values for first six months of 1979 (p. 326) were `7798 7406 8363 8460 9217 9316`.
### Source
P. J. Brockwell and R. A. Davis (1991) *Time Series: Theory and Methods.* Springer, New York.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
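### Examples

An illustrative sketch (not part of the original help page), assuming the MASS package is attached:

```
library(MASS)
## Illustrative only: plot the monthly series and a seasonal decomposition
plot(accdeaths)
plot(stl(accdeaths, s.window = "periodic"))
```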
`housing` Frequency Table from a Copenhagen Housing Conditions Survey
----------------------------------------------------------------------
### Description
The `housing` data frame has 72 rows and 5 variables.
### Usage
```
housing
```
### Format
`Sat`
Satisfaction of householders with their present housing circumstances, (High, Medium or Low, ordered factor).
`Infl`
Perceived degree of influence householders have on the management of the property (High, Medium, Low).
`Type`
Type of rental accommodation, (Tower, Atrium, Apartment, Terrace).
`Cont`
Contact residents are afforded with other residents, (Low, High).
`Freq`
Frequencies: the numbers of residents in each class.
### Source
Madsen, M. (1976) Statistical analysis of multiple contingency tables. Two examples. *Scand. J. Statist.* **3**, 97–106.
Cox, D. R. and Snell, E. J. (1984) *Applied Statistics, Principles and Examples*. Chapman & Hall.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
options(contrasts = c("contr.treatment", "contr.poly"))
# Surrogate Poisson models
house.glm0 <- glm(Freq ~ Infl*Type*Cont + Sat, family = poisson,
data = housing)
## IGNORE_RDIFF_BEGIN
summary(house.glm0, cor = FALSE)
## IGNORE_RDIFF_END
addterm(house.glm0, ~. + Sat:(Infl+Type+Cont), test = "Chisq")
house.glm1 <- update(house.glm0, . ~ . + Sat*(Infl+Type+Cont))
summary(house.glm1, cor = FALSE)
1 - pchisq(deviance(house.glm1), house.glm1$df.residual)
dropterm(house.glm1, test = "Chisq")
addterm(house.glm1, ~. + Sat:(Infl+Type+Cont)^2, test = "Chisq")
hnames <- lapply(housing[, -5], levels) # omit Freq
newData <- expand.grid(hnames)
newData$Sat <- ordered(newData$Sat)
house.pm <- predict(house.glm1, newData,
type = "response") # poisson means
house.pm <- matrix(house.pm, ncol = 3, byrow = TRUE,
dimnames = list(NULL, hnames[[1]]))
house.pr <- house.pm/drop(house.pm %*% rep(1, 3))
cbind(expand.grid(hnames[-1]), round(house.pr, 2))
# Iterative proportional scaling
loglm(Freq ~ Infl*Type*Cont + Sat*(Infl+Type+Cont), data = housing)
# multinomial model
library(nnet)
(house.mult<- multinom(Sat ~ Infl + Type + Cont, weights = Freq,
data = housing))
house.mult2 <- multinom(Sat ~ Infl*Type*Cont, weights = Freq,
data = housing)
anova(house.mult, house.mult2)
house.pm <- predict(house.mult, expand.grid(hnames[-1]), type = "probs")
cbind(expand.grid(hnames[-1]), round(house.pm, 2))
# proportional odds model
house.cpr <- apply(house.pr, 1, cumsum)
logit <- function(x) log(x/(1-x))
house.ld <- logit(house.cpr[2, ]) - logit(house.cpr[1, ])
(ratio <- sort(drop(house.ld)))
mean(ratio)
(house.plr <- polr(Sat ~ Infl + Type + Cont,
data = housing, weights = Freq))
house.pr1 <- predict(house.plr, expand.grid(hnames[-1]), type = "probs")
cbind(expand.grid(hnames[-1]), round(house.pr1, 2))
Fr <- matrix(housing$Freq, ncol = 3, byrow = TRUE)
2*sum(Fr*log(house.pr/house.pr1))
house.plr2 <- stepAIC(house.plr, ~.^2)
house.plr2$anova
```
`logtrans` Estimate log Transformation Parameter
-------------------------------------------------
### Description
Find and optionally plot the marginal (profile) likelihood for alpha for a transformation model of the form `log(y + alpha) ~ x1 + x2 + ...`.
### Usage
```
logtrans(object, ...)
## Default S3 method:
logtrans(object, ..., alpha = seq(0.5, 6, by = 0.25) - min(y),
plotit = TRUE, interp =, xlab = "alpha",
ylab = "log Likelihood")
## S3 method for class 'formula'
logtrans(object, data, ...)
## S3 method for class 'lm'
logtrans(object, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | Fitted linear model object, or formula defining the untransformed model that is `y ~ x1 + x2 + ...`. The function is generic. |
| `...` | If `object` is a formula, this argument may specify a data frame as for `lm`. |
| `alpha` | Set of values for the transformation parameter, alpha. |
| `plotit` | Should plotting be done? |
| `interp` | Should the marginal log-likelihood be interpolated with a spline approximation? (Default is `TRUE` if plotting is to be done and the number of real points is less than 100.) |
| `xlab` | as for `plot`. |
| `ylab` | as for `plot`. |
| `data` | optional `data` argument for `lm` fit. |
### Value
List with components `x` (for alpha) and `y` (for the marginal log-likelihood values).
### Side Effects
A plot of the marginal log-likelihood is produced, if requested, together with an approximate mle and 95% confidence interval.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<boxcox>`
### Examples
```
logtrans(Days ~ Age*Sex*Eth*Lrn, data = quine,
alpha = seq(0.75, 6.5, len=20))
```
`bacteria` Presence of Bacteria after Drug Treatments
------------------------------------------------------
### Description
Tests of the presence of the bacteria *H. influenzae* in children with otitis media in the Northern Territory of Australia.
### Usage
```
bacteria
```
### Format
This data frame has 220 rows and the following columns:
y
presence or absence: a factor with levels `n` and `y`.
ap
active/placebo: a factor with levels `a` and `p`.
hilo
hi/low compliance: a factor with levels `hi` and `lo`.
week
numeric: week of test.
ID
subject ID: a factor.
trt
a factor with levels `placebo`, `drug` and `drug+`, a re-coding of `ap` and `hilo`.
### Details
Dr A. Leach tested the effects of a drug on 50 children with a history of otitis media in the Northern Territory of Australia. The children were randomized to the drug or a placebo, and also to receive active encouragement to comply with taking the drug.
The presence of *H. influenzae* was checked at weeks 0, 2, 4, 6 and 11: 30 of the checks were missing and are not included in this data frame.
### Source
Dr Amanda Leach *via* Mr James McBroom.
### References
Menzies School of Health Research 1999–2000 Annual Report. p.20. <http://www.menzies.edu.au/icms_docs/172302_2000_Annual_report.pdf>.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
contrasts(bacteria$trt) <- structure(contr.sdif(3),
dimnames = list(NULL, c("drug", "encourage")))
## fixed effects analyses
summary(glm(y ~ trt * week, binomial, data = bacteria))
summary(glm(y ~ trt + week, binomial, data = bacteria))
summary(glm(y ~ trt + I(week > 2), binomial, data = bacteria))
# conditional random-effects analysis
library(survival)
bacteria$Time <- rep(1, nrow(bacteria))
coxph(Surv(Time, unclass(y)) ~ week + strata(ID),
data = bacteria, method = "exact")
coxph(Surv(Time, unclass(y)) ~ factor(week) + strata(ID),
data = bacteria, method = "exact")
coxph(Surv(Time, unclass(y)) ~ I(week > 2) + strata(ID),
data = bacteria, method = "exact")
# PQL glmm analysis
library(nlme)
summary(glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
family = binomial, data = bacteria))
```
`nlschools` Eighth-Grade Pupils in the Netherlands
---------------------------------------------------
### Description
Snijders and Bosker (1999) use as a running example a study of 2287 eighth-grade pupils (aged about 11) in 132 classes in 131 schools in the Netherlands. Only the variables used in our examples are supplied.
### Usage
```
nlschools
```
### Format
This data frame contains 2287 rows and the following columns:
`lang`
language test score.
`IQ`
verbal IQ.
`class`
class ID.
`GS`
class size: number of eighth-grade pupils recorded in the class (there may be others: see `COMB`, and some may have been omitted with missing values).
`SES`
social-economic status of pupil's family.
`COMB`
were the pupils taught in a multi-grade class (`0/1`)? Classes which contained pupils from grades 7 and 8 are coded `1`, but only eighth-graders were tested.
### Source
Snijders, T. A. B. and Bosker, R. J. (1999) *Multilevel Analysis. An Introduction to Basic and Advanced Multilevel Modelling.* London: Sage.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
nl1 <- within(nlschools, {
IQave <- tapply(IQ, class, mean)[as.character(class)]
IQ <- IQ - IQave
})
cen <- c("IQ", "IQave", "SES")
nl1[cen] <- scale(nl1[cen], center = TRUE, scale = FALSE)
nl.lme <- nlme::lme(lang ~ IQ*COMB + IQave + SES,
random = ~ IQ | class, data = nl1)
## IGNORE_RDIFF_BEGIN
summary(nl.lme)
## IGNORE_RDIFF_END
```
`Aids2` Australian AIDS Survival Data
--------------------------------------
### Description
Data on patients diagnosed with AIDS in Australia before 1 July 1991.
### Usage
```
Aids2
```
### Format
This data frame contains 2843 rows and the following columns:
`state`
Grouped state of origin: `"NSW "`includes ACT and `"other"` is WA, SA, NT and TAS.
`sex`
Sex of patient.
`diag`
(Julian) date of diagnosis.
`death`
(Julian) date of death or end of observation.
`status`
`"A"` (alive) or `"D"` (dead) at end of observation.
`T.categ`
Reported transmission category.
`age`
Age (years) at diagnosis.
### Note
This data set has been slightly jittered as a condition of its release, to ensure patient confidentiality.
### Source
Dr P. J. Solomon and the Australian National Centre in HIV Epidemiology and Clinical Research.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
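### Examples

The original help page has no Examples section; this sketch is illustrative only and assumes the MASS package is attached:

```
library(MASS)
## Illustrative only: cross-tabulate survival status by state and by
## reported transmission category
with(Aids2, table(status, state))
with(Aids2, table(status, T.categ))
```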
`leuk` Survival Times and White Blood Counts for Leukaemia Patients
--------------------------------------------------------------------
### Description
A data frame of data from 33 leukaemia patients.
### Usage
```
leuk
```
### Format
A data frame with columns:
`wbc`
white blood count.
`ag`
a test result, `"present"` or `"absent"`.
`time`
survival time in weeks.
### Details
Survival times are given for 33 patients who died from acute myelogenous leukaemia. Also measured was the patient's white blood cell count at the time of diagnosis. The patients were also factored into 2 groups according to the presence or absence of a morphologic characteristic of white blood cells. Patients termed AG positive were identified by the presence of Auer rods and/or significant granulation of the leukaemic cells in the bone marrow at the time of diagnosis.
### Source
Cox, D. R. and Oakes, D. (1984) *Analysis of Survival Data*. Chapman & Hall, p. 9.
Taken from
Feigl, P. & Zelen, M. (1965) Estimation of exponential survival probabilities with concomitant information. *Biometrics* **21**, 826–838.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
library(survival)
plot(survfit(Surv(time) ~ ag, data = leuk), lty = 2:3, col = 2:3)
# now Cox models
leuk.cox <- coxph(Surv(time) ~ ag + log(wbc), leuk)
summary(leuk.cox)
```
r None
`painters` The Painter's Data of de Piles
------------------------------------------
### Description
The subjective assessment, on a 0 to 20 integer scale, of 54 classical painters. The painters were assessed on four characteristics: composition, drawing, colour and expression. The data is due to the Eighteenth century art critic, de Piles.
### Usage
```
painters
```
### Format
The row names of the data frame are the painters. The components are:
`Composition`
Composition score.
`Drawing`
Drawing score.
`Colour`
Colour score.
`Expression`
Expression score.
`School`
The school to which a painter belongs, as indicated by a factor level code as follows: `"A"`: Renaissance; `"B"`: Mannerist; `"C"`: Seicento; `"D"`: Venetian; `"E"`: Lombard; `"F"`: Sixteenth Century; `"G"`: Seventeenth Century; `"H"`: French.
### Source
A. J. Weekes (1986) *A Genstat Primer.* Edward Arnold.
M. Davenport and G. Studdert-Kennedy (1972) The statistical analysis of aesthetic judgement: an exploration. *Applied Statistics* **21**, 324–333.
I. T. Jolliffe (1986) *Principal Component Analysis.* Springer.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
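### Examples
A hedged sketch, not from the original page: mean scores by school, using the level codes listed under Format.
```
## average of the four assessment scores within each school
aggregate(painters[, 1:4], by = list(School = painters$School), FUN = mean)
```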
r None
`isoMDS` Kruskal's Non-metric Multidimensional Scaling
-------------------------------------------------------
### Description
One form of non-metric multidimensional scaling.
### Usage
```
isoMDS(d, y = cmdscale(d, k), k = 2, maxit = 50, trace = TRUE,
tol = 1e-3, p = 2)
Shepard(d, x, p = 2)
```
### Arguments
| | |
| --- | --- |
| `d` | distance structure of the form returned by `dist`, or a full, symmetric matrix. Data are assumed to be dissimilarities or relative distances, but must be positive except for self-distance. Both missing and infinite values are allowed. |
| `y` | An initial configuration. If none is supplied, `cmdscale` is used to provide the classical solution, unless there are missing or infinite dissimilarities. |
| `k` | The desired dimension for the solution, passed to `cmdscale`. |
| `maxit` | The maximum number of iterations. |
| `trace` | Logical for tracing optimization. Default `TRUE`. |
| `tol` | convergence tolerance. |
| `p` | Power for Minkowski distance in the configuration space. |
| `x` | A final configuration. |
### Details
This chooses a k-dimensional (default k = 2) configuration to minimize the stress, the square root of the ratio of the sum of squared differences between the input distances and those of the configuration to the sum of configuration distances squared. However, the input distances are allowed a monotonic transformation.
An iterative algorithm is used, which will usually converge in around 10 iterations. As this is necessarily an *O(n^2)* calculation, it is slow for large datasets. Further, since for the default *p = 2* the configuration is only determined up to rotations and reflections (by convention the centroid is at the origin), the result can vary considerably from machine to machine.
### Value
Two components:
| | |
| --- | --- |
| `points` | A k-column matrix giving the fitted configuration. |
| `stress` | The final stress achieved (in percent). |
### Side Effects
If `trace` is true, the initial stress and the current stress are printed out every 5 iterations.
### References
T. F. Cox and M. A. A. Cox (1994, 2001) *Multidimensional Scaling*. Chapman & Hall.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks*. Cambridge University Press.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[cmdscale](../../stats/html/cmdscale)`, `<sammon>`
### Examples
```
swiss.x <- as.matrix(swiss[, -1])
swiss.dist <- dist(swiss.x)
swiss.mds <- isoMDS(swiss.dist)
plot(swiss.mds$points, type = "n")
text(swiss.mds$points, labels = as.character(1:nrow(swiss.x)))
swiss.sh <- Shepard(swiss.dist, swiss.mds$points)
plot(swiss.sh, pch = ".")
lines(swiss.sh$x, swiss.sh$yf, type = "S")
```
r None
`cpus` Performance of Computer CPUs
------------------------------------
### Description
A relative performance measure and characteristics of 209 CPUs.
### Usage
```
cpus
```
### Format
The components are:
`name`
manufacturer and model.
`syct`
cycle time in nanoseconds.
`mmin`
minimum main memory in kilobytes.
`mmax`
maximum main memory in kilobytes.
`cach`
cache size in kilobytes.
`chmin`
minimum number of channels.
`chmax`
maximum number of channels.
`perf`
published performance on a benchmark mix relative to an IBM 370/158-3.
`estperf`
estimated performance (by Ein-Dor & Feldmesser).
### Source
P. Ein-Dor and J. Feldmesser (1987) Attributes of the performance of central processing units: a relative performance prediction model. *Comm. ACM.* **30**, 308–317.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
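### Examples
A hedged sketch, not from the original page: a simple linear model for log-performance. The object name `cpus.lm` is arbitrary, and `estperf` (the published estimate) is deliberately excluded from the predictors.
```
cpus.lm <- lm(log10(perf) ~ syct + mmin + mmax + cach + chmin + chmax, data = cpus)
summary(cpus.lm)
```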
r None
`mcycle` Data from a Simulated Motorcycle Accident
---------------------------------------------------
### Description
A data frame giving a series of measurements of head acceleration in a simulated motorcycle accident, used to test crash helmets.
### Usage
```
mcycle
```
### Format
`times`
in milliseconds after impact.
`accel`
in g.
### Source
Silverman, B. W. (1985) Some aspects of the spline smoothing approach to non-parametric curve fitting. *Journal of the Royal Statistical Society series B* **47**, 1–52.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
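### Examples
A hedged sketch, not from the original page: plot the acceleration curve with a smoothing spline overlaid.
```
plot(accel ~ times, data = mcycle,
     xlab = "time (ms)", ylab = "acceleration (g)")
lines(smooth.spline(mcycle$times, mcycle$accel), col = 2)
```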
r None
`gilgais` Line Transect of Soil in Gilgai Territory
----------------------------------------------------
### Description
This dataset was collected on a line transect survey in gilgai territory in New South Wales, Australia. Gilgais are natural gentle depressions in otherwise flat land, and sometimes seem to be regularly distributed. The data collection was stimulated by the question: are these patterns reflected in soil properties? At each of 365 sampling locations on a linear grid of 4 meters spacing, samples were taken at depths 0-10 cm, 30-40 cm and 80-90 cm below the surface. pH, electrical conductivity and chloride content were measured on a 1:5 soil:water extract from each sample.
### Usage
```
gilgais
```
### Format
This data frame contains the following columns:
`pH00`
pH at depth 0–10 cm.
`pH30`
pH at depth 30–40 cm.
`pH80`
pH at depth 80–90 cm.
`e00`
electrical conductivity in mS/cm (0–10 cm).
`e30`
electrical conductivity in mS/cm (30–40 cm).
`e80`
electrical conductivity in mS/cm (80–90 cm).
`c00`
chloride content in ppm (0–10 cm).
`c30`
chloride content in ppm (30–40 cm).
`c80`
chloride content in ppm (80–90 cm).
### Source
Webster, R. (1977) Spectral analysis of gilgai soil. *Australian Journal of Soil Research* **15**, 191–204.
Laslett, G. M. (1989) Kriging and splines: An empirical comparison of their predictive performance in some applications (with discussion). *Journal of the American Statistical Association* **89**, 319–409
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
r None
`eqscplot` Plots with Geometrically Equal Scales
-------------------------------------------------
### Description
Version of a scatterplot with scales chosen to be equal on both axes, that is, 1 cm represents the same number of data units on each axis.
### Usage
```
eqscplot(x, y, ratio = 1, tol = 0.04, uin, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | vector of x values, or a 2-column matrix, or a list with components `x` and `y` |
| `y` | vector of y values |
| `ratio` | desired ratio of units on the axes. Units on the y axis are drawn at `ratio` times the size of units on the x axis. Ignored if `uin` is specified and of length 2. |
| `tol` | proportion of white space at the margins of plot |
| `uin` | desired values for the units-per-inch parameter. If of length 1, the desired units per inch on the x axis. |
| `...` | further arguments for `plot` and graphical parameters. Note that `par(xaxs="i", yaxs="i")` is enforced, and `xlim` and `ylim` will be adjusted accordingly. |
### Details
Limits for the x and y axes are chosen so that they include the data. One of the sets of limits is then stretched from the midpoint to make the units in the ratio given by `ratio`. Finally both are stretched by `1 + tol` to move points away from the axes, and the points plotted.
### Value
invisibly, the values of `uin` used for the plot.
### Side Effects
performs the plot.
### Note
Arguments `ratio` and `uin` were suggested by Bill Dunlap.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[plot](../../graphics/html/plot.default)`, `[par](../../graphics/html/par)`
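### Examples
A hedged sketch, not from the original page: a unit circle appears circular (rather than elliptical) because both axes use the same scale; `theta` is just a grid of angles.
```
theta <- seq(0, 2 * pi, length.out = 200)
eqscplot(cos(theta), sin(theta), type = "l")
```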
r None
`kde2d` Two-Dimensional Kernel Density Estimation
--------------------------------------------------
### Description
Two-dimensional kernel density estimation with an axis-aligned bivariate normal kernel, evaluated on a square grid.
### Usage
```
kde2d(x, y, h, n = 25, lims = c(range(x), range(y)))
```
### Arguments
| | |
| --- | --- |
| `x` | x coordinate of data |
| `y` | y coordinate of data |
| `h` | vector of bandwidths for x and y directions. Defaults to normal reference bandwidth (see `<bandwidth.nrd>`). A scalar value will be taken to apply to both directions. |
| `n` | Number of grid points in each direction. Can be scalar or a length-2 integer vector. |
| `lims` | The limits of the rectangle covered by the grid as `c(xl, xu, yl, yu)`. |
### Value
A list of three components.
| | |
| --- | --- |
| `x, y` | The x and y coordinates of the grid points, vectors of length `n`. |
| `z` | An `n[1]` by `n[2]` matrix of the estimated density: rows correspond to the value of `x`, columns to the value of `y`. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
attach(geyser)
plot(duration, waiting, xlim = c(0.5,6), ylim = c(40,100))
f1 <- kde2d(duration, waiting, n = 50, lims = c(0.5, 6, 40, 100))
image(f1, zlim = c(0, 0.05))
f2 <- kde2d(duration, waiting, n = 50, lims = c(0.5, 6, 40, 100),
h = c(width.SJ(duration), width.SJ(waiting)) )
image(f2, zlim = c(0, 0.05))
persp(f2, phi = 30, theta = 20, d = 5)
plot(duration[-272], duration[-1], xlim = c(0.5, 6),
ylim = c(1, 6),xlab = "previous duration", ylab = "duration")
f1 <- kde2d(duration[-272], duration[-1],
h = rep(1.5, 2), n = 50, lims = c(0.5, 6, 0.5, 6))
contour(f1, xlab = "previous duration",
ylab = "duration", levels = c(0.05, 0.1, 0.2, 0.4) )
f1 <- kde2d(duration[-272], duration[-1],
h = rep(0.6, 2), n = 50, lims = c(0.5, 6, 0.5, 6))
contour(f1, xlab = "previous duration",
ylab = "duration", levels = c(0.05, 0.1, 0.2, 0.4) )
f1 <- kde2d(duration[-272], duration[-1],
h = rep(0.4, 2), n = 50, lims = c(0.5, 6, 0.5, 6))
contour(f1, xlab = "previous duration",
ylab = "duration", levels = c(0.05, 0.1, 0.2, 0.4) )
detach("geyser")
```
r None
`stdres` Extract Standardized Residuals from a Linear Model
------------------------------------------------------------
### Description
The standardized residuals. These are normalized to unit variance, fitted including the current data point.
### Usage
```
stdres(object)
```
### Arguments
| | |
| --- | --- |
| `object` | any object representing a linear model. |
### Value
The vector of appropriately transformed residuals.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[residuals](../../stats/html/residuals)`, `<studres>`
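### Examples
A hedged sketch, not from the original page: standardized residuals from a linear model for the `hills` data; `hills.lm` is an arbitrary name.
```
hills.lm <- lm(time ~ dist + climb, data = hills)
stdres(hills.lm)
```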
r None
`boxcox` Box-Cox Transformations for Linear Models
---------------------------------------------------
### Description
Computes and optionally plots profile log-likelihoods for the parameter of the Box-Cox power transformation.
### Usage
```
boxcox(object, ...)
## Default S3 method:
boxcox(object, lambda = seq(-2, 2, 1/10), plotit = TRUE,
interp, eps = 1/50, xlab = expression(lambda),
ylab = "log-Likelihood", ...)
## S3 method for class 'formula'
boxcox(object, lambda = seq(-2, 2, 1/10), plotit = TRUE,
interp, eps = 1/50, xlab = expression(lambda),
ylab = "log-Likelihood", ...)
## S3 method for class 'lm'
boxcox(object, lambda = seq(-2, 2, 1/10), plotit = TRUE,
interp, eps = 1/50, xlab = expression(lambda),
ylab = "log-Likelihood", ...)
```
### Arguments
| | |
| --- | --- |
| `object` | a formula or fitted model object. Currently only `lm` and `aov` objects are handled. |
| `lambda` | vector of values of `lambda` – default *(-2, 2)* in steps of 0.1. |
| `plotit` | logical which controls whether the result should be plotted. |
| `interp` | logical which controls whether spline interpolation is used. Defaults to `TRUE` if plotting with `lambda` of length less than 100. |
| `eps` | Tolerance for `lambda = 0`; defaults to 0.02. |
| `xlab` | defaults to `"lambda"`. |
| `ylab` | defaults to `"log-Likelihood"`. |
| `...` | additional parameters to be used in the model fitting. |
### Value
A list of the `lambda` vector and the computed profile log-likelihood vector, invisibly if the result is plotted.
### Side Effects
If `plotit = TRUE` plots log-likelihood *vs* `lambda` and indicates a 95% confidence interval about the maximum observed value of `lambda`. If `interp = TRUE`, spline interpolation is used to give a smoother plot.
### References
Box, G. E. P. and Cox, D. R. (1964) An analysis of transformations (with discussion). *Journal of the Royal Statistical Society B*, **26**, 211–252.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
boxcox(Volume ~ log(Height) + log(Girth), data = trees,
lambda = seq(-0.25, 0.25, length = 10))
boxcox(Days+1 ~ Eth*Sex*Age*Lrn, data = quine,
lambda = seq(-0.05, 0.45, len = 20))
```
r None
`hills` Record Times in Scottish Hill Races
--------------------------------------------
### Description
The record times in 1984 for 35 Scottish hill races.
### Usage
```
hills
```
### Format
The components are:
`dist`
distance in miles (on the map).
`climb`
total height gained during the route, in feet.
`time`
record time in minutes.
### Source
A.C. Atkinson (1986) Comment: Aspects of diagnostic regression analysis. *Statistical Science* **1**, 397–402.
[A.C. Atkinson (1988) Transformations unmasked. *Technometrics* **30**, 311–318 “corrects” the time for Knock Hill from 78.65 to 18.65. It is unclear if this is based on the original records.]
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
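### Examples
A hedged sketch, not from the original page: a quick look at the data and a simple linear model for the record time; `hills.lm` is an arbitrary name.
```
pairs(hills)
hills.lm <- lm(time ~ dist + climb, data = hills)
summary(hills.lm)
```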
r None
`truehist` Plot a Histogram
----------------------------
### Description
Creates a histogram on the current graphics device.
### Usage
```
truehist(data, nbins = "Scott", h, x0 = -h/1000,
breaks, prob = TRUE, xlim = range(breaks),
ymax = max(est), col = "cyan",
xlab = deparse(substitute(data)), bty = "n", ...)
```
### Arguments
| | |
| --- | --- |
| `data` | numeric vector of data for histogram. Missing values (`NA`s) are allowed and omitted. |
| `nbins` | The suggested number of bins. Either a positive integer, or a character string naming a rule: `"Scott"` or `"Freedman-Diaconis"` or `"FD"`. (Case is ignored.) |
| `h` | The bin width, a strictly positive number (takes precedence over `nbins`). |
| `x0` | Shift for the bins - the breaks are at `x0 + h * (..., -1, 0, 1, ...)` |
| `breaks` | The set of breakpoints to be used. (Usually omitted, takes precedence over `h` and `nbins`). |
| `prob` | If true (the default) plot a true histogram. The vertical axis has a *relative frequency density* scale, so the product of the dimensions of any panel gives the relative frequency. Hence the total area under the histogram is 1 and it is directly comparable with most other estimates of the probability density function. If false plot the counts in the bins. |
| `xlim` | The limits for the x-axis. |
| `ymax` | The upper limit for the y-axis. |
| `col` | The colour for the bar fill: the default is colour 5 in the default **R** palette. |
| `xlab` | label for the plot x-axis. By default, this will be the name of `data`. |
| `bty` | The box type for the plot - defaults to none. |
| `...` | additional arguments to `[rect](../../graphics/html/rect)` or `[plot](../../graphics/html/plot.default)`. |
### Details
This plots a true histogram, a density estimate of total area 1. If `breaks` is specified, those breakpoints are used. Otherwise if `h` is specified, a regular grid of bins is used with width `h`. If neither `breaks` nor `h` is specified, `nbins` is used to select a suitable `h`.
### Side Effects
A histogram is plotted on the current device.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[hist](../../graphics/html/hist)`
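### Examples
A hedged sketch, not from the original page: a true histogram of the eruption durations in the MASS `geyser` data, with a bin width of 0.25 minutes.
```
truehist(geyser$duration, h = 0.25, xlab = "duration (min)")
```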
r None
`newcomb` Newcomb's Measurements of the Passage Time of Light
--------------------------------------------------------------
### Description
A numeric vector giving the ‘Third Series’ of measurements of the passage time of light recorded by Newcomb in 1882. The given values divided by 1000 plus 24.8 give the time in millionths of a second for light to traverse a known distance. The ‘true’ value is now considered to be 33.02.
The dataset is given in the order in Staudte and Sheather. Stigler (1977, Table 5) gives the dataset as
```
28 26 33 24 34 -44 27 16 40 -2 29 22 24 21 25 30 23 29 31 19
24 20 36 32 36 28 25 21 28 29 37 25 28 26 30 32 36 26 30 22
36 23 27 27 28 27 31 27 26 33 26 32 32 24 39 28 24 25 32 25
29 27 28 29 16 23
```
However, order is not relevant to its use as an example of robust estimation. (Thanks to Anthony Unwin for bringing this difference to our attention.)
### Usage
```
newcomb
```
### Source
S. M. Stigler (1973) Simon Newcomb, Percy Daniell, and the history of robust estimation 1885–1920. *Journal of the American Statistical Association* **68**, 872–879.
S. M. Stigler (1977) Do robust estimators work with *real* data? *Annals of Statistics*, **5**, 1055–1098.
R. G. Staudte and S. J. Sheather (1990) *Robust Estimation and Testing.* Wiley.
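### Examples
A hedged sketch, not from the original page: classical and robust location estimates for the measurements (`huber` is the Huber M-estimator in MASS).
```
mean(newcomb)
median(newcomb)
huber(newcomb)
```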
r None
`coop` Co-operative Trial in Analytical Chemistry
--------------------------------------------------
### Description
Seven specimens were sent to 6 laboratories in 3 separate batches and each analysed for Analyte. Each analysis was duplicated.
### Usage
```
coop
```
### Format
This data frame contains the following columns:
`Lab`
Laboratory, `L1`, `L2`, ..., `L6`.
`Spc`
Specimen, `S1`, `S2`, ..., `S7`.
`Bat`
Batch, `B1`, `B2`, `B3` (nested within `Spc/Lab`).
`Conc`
Concentration of Analyte in *g/kg*.
### Source
Analytical Methods Committee (1987) Recommendations for the conduct and interpretation of co-operative trials, *The Analyst* **112**, 679–686.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<chem>`, `<abbey>`.
r None
`deaths` Monthly Deaths from Lung Diseases in the UK
-----------------------------------------------------
### Description
A time series giving the monthly deaths from bronchitis, emphysema and asthma in the UK, 1974-1979, both sexes (`deaths`).
### Usage
```
deaths
```
### Source
P. J. Diggle (1990) *Time Series: A Biostatistical Introduction.* Oxford, table A.3
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
This is the same as the dataset `[ldeaths](../../datasets/html/uklungdeaths)` in **R**'s datasets package.
r None
`survey` Student Survey Data
-----------------------------
### Description
This data frame contains the responses of 237 Statistics I students at the University of Adelaide to a number of questions.
### Usage
```
survey
```
### Format
The components of the data frame are:
`Sex`
The sex of the student. (Factor with levels `"Male"` and `"Female"`.)
`Wr.Hnd`
span (distance from tip of thumb to tip of little finger of spread hand) of writing hand, in centimetres.
`NW.Hnd`
span of non-writing hand.
`W.Hnd`
writing hand of student. (Factor, with levels `"Left"` and `"Right"`.)
`Fold`
“Fold your arms! Which is on top” (Factor, with levels `"R on L"`, `"L on R"`, `"Neither"`.)
`Pulse`
pulse rate of student (beats per minute).
`Clap`
‘Clap your hands! Which hand is on top?’ (Factor, with levels `"Right"`, `"Left"`, `"Neither"`.)
`Exer`
how often the student exercises. (Factor, with levels `"Freq"` (frequently), `"Some"`, `"None"`.)
`Smoke`
how much the student smokes. (Factor, levels `"Heavy"`, `"Regul"` (regularly), `"Occas"` (occasionally), `"Never"`.)
`Height`
height of the student in centimetres.
`M.I`
whether the student expressed height in imperial (feet/inches) or metric (centimetres/metres) units. (Factor, levels `"Metric"`, `"Imperial"`.)
`Age`
age of the student in years.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
r None
`rms.curv` Relative Curvature Measures for Non-Linear Regression
-----------------------------------------------------------------
### Description
Calculates the root mean square parameter effects and intrinsic relative curvatures, *c^theta* and *c^iota*, for a fitted nonlinear regression, as defined in Bates & Watts, section 7.3, p. 253ff
### Usage
```
rms.curv(obj)
```
### Arguments
| | |
| --- | --- |
| `obj` | Fitted model object of class `"nls"`. The model must be fitted using the default algorithm. |
### Details
The method of section 7.3.1 of Bates & Watts is implemented. The function `deriv3` should be used to generate a model function with first derivative (gradient) matrix and second derivative (Hessian) array attributes. This function should then be used to fit the nonlinear regression model.
A print method, `print.rms.curv`, prints the `pc` and `ic` components only, suitably annotated.
If either `pc` or `ic` exceeds some threshold (0.3 has been suggested) the curvature is unacceptably high for the planar assumption.
### Value
A list of class `rms.curv` with components `pc` and `ic` for parameter effects and intrinsic relative curvatures multiplied by sqrt(F), `ct` and `ci` for *c^θ* and *c^ι* (unmultiplied), and `C` the C-array as used in section 7.3.1 of Bates & Watts.
### References
Bates, D. M, and Watts, D. G. (1988) *Nonlinear Regression Analysis and its Applications.* Wiley, New York.
### See Also
`[deriv3](../../stats/html/deriv)`
### Examples
```
# The treated sample from the Puromycin data
mmcurve <- deriv3(~ Vm * conc/(K + conc), c("Vm", "K"),
function(Vm, K, conc) NULL)
Treated <- Puromycin[Puromycin$state == "treated", ]
(Purfit1 <- nls(rate ~ mmcurve(Vm, K, conc), data = Treated,
start = list(Vm=200, K=0.1)))
rms.curv(Purfit1)
##Parameter effects: c^theta x sqrt(F) = 0.2121
## Intrinsic: c^iota x sqrt(F) = 0.092
```
r None
`fgl` Measurements of Forensic Glass Fragments
-----------------------------------------------
### Description
The `fgl` data frame has 214 rows and 10 columns. It was collected by B. German on fragments of glass collected in forensic work.
### Usage
```
fgl
```
### Format
This data frame contains the following columns:
`RI`
refractive index; more precisely the refractive index is 1.518xxxx.
The next 8 measurements are percentages by weight of oxides.
`Na`
sodium.
`Mg`
magnesium.
`Al`
aluminium.
`Si`
silicon.
`K`
potassium.
`Ca`
calcium.
`Ba`
barium.
`Fe`
iron.
`type`
The fragments were originally classed into seven types, one of which was absent in this dataset. The categories which occur are window float glass (`WinF`: 70), window non-float glass (`WinNF`: 76), vehicle window glass (`Veh`: 17), containers (`Con`: 13), tableware (`Tabl`: 9) and vehicle headlamps (`Head`: 29).
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
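### Examples
A hedged sketch, not from the original page: a linear discriminant analysis of glass type on the chemical measurements; `fgl.lda` is an arbitrary name and the resubstitution table is optimistic.
```
fgl.lda <- lda(type ~ ., data = fgl)
table(fgl$type, predict(fgl.lda)$class)
```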
r None
`topo` Spatial Topographic Data
--------------------------------
### Description
The `topo` data frame has 52 rows and 3 columns, of topographic heights within a 310 feet square.
### Usage
```
topo
```
### Format
This data frame contains the following columns:
`x`
x coordinates (units of 50 feet)
`y`
y coordinates (units of 50 feet)
`z`
heights (feet)
### Source
Davis, J.C. (1973) *Statistics and Data Analysis in Geology.* Wiley.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
r None
`contr.sdif` Successive Differences Contrast Coding
----------------------------------------------------
### Description
A coding for factors based on successive differences.
### Usage
```
contr.sdif(n, contrasts = TRUE, sparse = FALSE)
```
### Arguments
| | |
| --- | --- |
| `n` | The number of levels required. |
| `contrasts` | logical: Should there be `n - 1` columns orthogonal to the mean (the default) or `n` columns spanning the space? |
| `sparse` | logical. If true and the result would be sparse (only true for `contrasts = FALSE`), return a sparse matrix. |
### Details
The contrast coefficients are chosen so that the coded coefficients in a one-way layout are the differences between the means of the second and first levels, the third and second levels, and so on. This makes most sense for ordered factors, but does not assume that the levels are equally spaced.
### Value
If `contrasts` is `TRUE`, a matrix with `n` rows and `n - 1` columns, and the `n` by `n` identity matrix if `contrasts` is `FALSE`.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth Edition, Springer.
### See Also
`[contr.treatment](../../stats/html/contrast)`, `[contr.sum](../../stats/html/contrast)`, `[contr.helmert](../../stats/html/contrast)`.
### Examples
```
(A <- contr.sdif(6))
zapsmall(ginv(A))
```
r None
`gehan` Remission Times of Leukaemia Patients
----------------------------------------------
### Description
A data frame from a trial of 42 leukaemia patients. Some were treated with the drug *6-mercaptopurine* and the rest are controls. The trial was designed as matched pairs, both withdrawn from the trial when either came out of remission.
### Usage
```
gehan
```
### Format
This data frame contains the following columns:
`pair`
label for pair.
`time`
remission time in weeks.
`cens`
censoring, 0/1.
`treat`
treatment, control or 6-MP.
### Source
Cox, D. R. and Oakes, D. (1984) *Analysis of Survival Data.* Chapman & Hall, p. 7. Taken from
Gehan, E.A. (1965) A generalized Wilcoxon test for comparing arbitrarily single-censored samples. *Biometrika* **52**, 203–233.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
library(survival)
gehan.surv <- survfit(Surv(time, cens) ~ treat, data = gehan,
conf.type = "log-log")
summary(gehan.surv)
survreg(Surv(time, cens) ~ factor(pair) + treat, gehan, dist = "exponential")
summary(survreg(Surv(time, cens) ~ treat, gehan, dist = "exponential"))
summary(survreg(Surv(time, cens) ~ treat, gehan))
gehan.cox <- coxph(Surv(time, cens) ~ treat, gehan)
summary(gehan.cox)
```
r None
`ucv` Unbiased Cross-Validation for Bandwidth Selection
--------------------------------------------------------
### Description
Uses unbiased cross-validation to select the bandwidth of a Gaussian kernel density estimator.
### Usage
```
ucv(x, nb = 1000, lower, upper)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector |
| `nb` | number of bins to use. |
| `lower, upper` | Range over which to minimize. The default is almost always satisfactory. |
### Value
a bandwidth.
### References
Scott, D. W. (1992) *Multivariate Density Estimation: Theory, Practice, and Visualization.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<bcv>`, `[width.SJ](width.sj)`, `[density](../../stats/html/density)`
### Examples
```
ucv(geyser$duration)
```
r None
`Null` Null Spaces of Matrices
-------------------------------
### Description
Given a matrix, `M`, find a matrix `N` giving a basis for the (left) null space. That is `crossprod(N, M) = t(N) %*% M` is an all-zero matrix and `N` has the maximum number of linearly independent columns.
### Usage
```
Null(M)
```
### Arguments
| | |
| --- | --- |
| `M` | Input matrix. A vector is coerced to a 1-column matrix. |
### Details
For a basis for the (right) null space *{x : Mx = 0}*, use `Null(t(M))`.
### Value
The matrix `N` with the basis for the (left) null space, or a matrix with zero columns if the matrix `M` is square and of maximal rank.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[qr](../../base/html/qr)`, `[qr.Q](../../matrix/html/sparseqr-class)`.
### Examples
```
# The function is currently defined as
function(M)
{
tmp <- qr(M)
set <- if(tmp$rank == 0L) seq_len(ncol(M)) else -seq_len(tmp$rank)
qr.Q(tmp, complete = TRUE)[, set, drop = FALSE]
}
```
r None
`road` Road Accident Deaths in US States
-----------------------------------------
### Description
A data frame with the annual deaths in road accidents for half the US states.
### Usage
```
road
```
### Format
Columns are:
`state`
name.
`deaths`
number of deaths.
`drivers`
number of drivers (in 10,000s).
`popden`
population density in people per square mile.
`rural`
length of rural roads, in 1000s of miles.
`temp`
average daily maximum temperature in January.
`fuel`
fuel consumption in 10,000,000 US gallons per year.
### Source
Imperial College, London M.Sc. exercise
r None
`predict.lda` Classify Multivariate Observations by Linear Discrimination
--------------------------------------------------------------------------
### Description
Classify multivariate observations in conjunction with `lda`, and also project data onto the linear discriminants.
### Usage
```
## S3 method for class 'lda'
predict(object, newdata, prior = object$prior, dimen,
method = c("plug-in", "predictive", "debiased"), ...)
```
### Arguments
| | |
| --- | --- |
| `object` | object of class `"lda"` |
| `newdata` | data frame of cases to be classified or, if `object` has a formula, a data frame with columns of the same names as the variables used. A vector will be interpreted as a row vector. If newdata is missing, an attempt will be made to retrieve the data used to fit the `lda` object. |
| `prior` | The prior probabilities of the classes, by default the proportions in the training set or what was set in the call to `lda`. |
| `dimen` | the dimension of the space to be used. If this is less than `min(p, ng-1)`, only the first `dimen` discriminant components are used (except for `method="predictive"`), and only those dimensions are returned in `x`. |
| `method` | This determines how the parameter estimation is handled. With `"plug-in"` (the default) the usual unbiased parameter estimates are used and assumed to be correct. With `"debiased"` an unbiased estimator of the log posterior probabilities is used, and with `"predictive"` the parameter estimates are integrated out using a vague prior. |
| `...` | arguments based from or to other methods |
### Details
This function is a method for the generic function `predict()` for class `"lda"`. It can be invoked by calling `predict(x)` for an object `x` of the appropriate class, or directly by calling `predict.lda(x)` regardless of the class of the object.
Missing values in `newdata` are handled by returning `NA` if the linear discriminants cannot be evaluated. If `newdata` is omitted and the `na.action` of the fit omitted cases, these will be omitted on the prediction.
This version centres the linear discriminants so that the weighted mean (weighted by `prior`) of the group centroids is at the origin.
### Value
a list with components
| | |
| --- | --- |
| `class` | The MAP classification (a factor) |
| `posterior` | posterior probabilities for the classes |
| `x` | the scores of test cases on up to `dimen` discriminant variables |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks*. Cambridge University Press.
### See Also
`<lda>`, `<qda>`, `<predict.qda>`
### Examples
```
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
z <- lda(train, cl)
predict(z, test)$class
```
r None
`bandwidth.nrd` Bandwidth for density() via Normal Reference Distribution
--------------------------------------------------------------------------
### Description
A well-supported rule-of-thumb for choosing the bandwidth of a Gaussian kernel density estimator.
### Usage
```
bandwidth.nrd(x)
```
### Arguments
| | |
| --- | --- |
| `x` | A data vector. |
### Value
A bandwidth on a scale suitable for the `width` argument of `density`.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Springer, equation (5.5) on page 130.
### Examples
```
# The function is currently defined as
function(x)
{
r <- quantile(x, c(0.25, 0.75))
h <- (r[2] - r[1])/1.34
4 * 1.06 * min(sqrt(var(x)), h) * length(x)^(-1/5)
}
```
r None
`mvrnorm` Simulate from a Multivariate Normal Distribution
-----------------------------------------------------------
### Description
Produces one or more samples from the specified multivariate normal distribution.
### Usage
```
mvrnorm(n = 1, mu, Sigma, tol = 1e-6, empirical = FALSE, EISPACK = FALSE)
```
### Arguments
| | |
| --- | --- |
| `n` | the number of samples required. |
| `mu` | a vector giving the means of the variables. |
| `Sigma` | a positive-definite symmetric matrix specifying the covariance matrix of the variables. |
| `tol` | tolerance (relative to largest variance) for numerical lack of positive-definiteness in `Sigma`. |
| `empirical` | logical. If true, mu and Sigma specify the empirical not population mean and covariance matrix. |
| `EISPACK` | logical: values other than `FALSE` are an error. |
### Details
The matrix decomposition is done via `eigen`; although a Choleski decomposition might be faster, the eigendecomposition is stabler.
### Value
If `n = 1` a vector of the same length as `mu`, otherwise an `n` by `length(mu)` matrix with one sample in each row.
### Side Effects
Causes creation of the dataset `.Random.seed` if it does not already exist, otherwise its value is updated.
### References
B. D. Ripley (1987) *Stochastic Simulation.* Wiley. Page 98.
### See Also
`[rnorm](../../stats/html/normal)`
### Examples
```
Sigma <- matrix(c(10,3,3,2),2,2)
Sigma
var(mvrnorm(n = 1000, rep(0, 2), Sigma))
var(mvrnorm(n = 1000, rep(0, 2), Sigma, empirical = TRUE))
```
r None
`corresp` Simple Correspondence Analysis
-----------------------------------------
### Description
Find the principal canonical correlation and corresponding row- and column-scores from a correspondence analysis of a two-way contingency table.
### Usage
```
corresp(x, ...)
## S3 method for class 'matrix'
corresp(x, nf = 1, ...)
## S3 method for class 'factor'
corresp(x, y, ...)
## S3 method for class 'data.frame'
corresp(x, ...)
## S3 method for class 'xtabs'
corresp(x, ...)
## S3 method for class 'formula'
corresp(formula, data, ...)
```
### Arguments
| | |
| --- | --- |
| `x, formula` | The function is generic, accepting various forms of the principal argument for specifying a two-way frequency table. Currently accepted forms are matrices, data frames (coerced to frequency tables), objects of class `"[xtabs](../../stats/html/xtabs)"` and formulae of the form `~ F1 + F2`, where `F1` and `F2` are factors. |
| `nf` | The number of factors to be computed. Note that although 1 is the most usual, one school of thought takes the first two singular vectors for a sort of biplot. |
| `y` | a second factor for a cross-classification. |
| `data` | an optional data frame, list or environment against which to preferentially resolve variables in the formula. |
| `...` | If the principal argument is a formula, a data frame may be specified as well from which variables in the formula are preferentially satisfied. |
### Details
See Venables & Ripley (2002). The `plot` method produces a graphical representation of the table if `nf=1`, with the *areas* of circles representing the numbers of points. If `nf` is two or more the `biplot` method is called, which plots the second and third columns of the matrices `A = Dr^(-1/2) U L` and `B = Dc^(-1/2) V L` where the singular value decomposition is `U L V`. Thus the x-axis is the canonical correlation times the row and column scores. Although this is called a biplot, it does *not* have any useful inner product relationship between the row and column scores. Think of this as an equally-scaled plot with two unrelated sets of labels. The origin is marked on the plot with a cross. (For other versions of this plot see the book.)
### Value
A list object of class `"correspondence"` for which `print`, `plot` and `biplot` methods are supplied. The main components are the canonical correlation(s) and the row and column scores.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
Gower, J. C. and Hand, D. J. (1996) *Biplots.* Chapman & Hall.
### See Also
`[svd](../../base/html/svd)`, `[princomp](../../stats/html/princomp)`.
### Examples
```
(ct <- corresp(~ Age + Eth, data = quine))
plot(ct)
corresp(caith)
biplot(corresp(caith, nf = 2))
```
r None
`caith` Colours of Eyes and Hair of People in Caithness
--------------------------------------------------------
### Description
Data on the cross-classification of people in Caithness, Scotland, by eye and hair colour. The region of the UK is particularly interesting as there is a mixture of people of Nordic, Celtic and Anglo-Saxon origin.
### Usage
```
caith
```
### Format
A 4 by 5 table with rows the eye colours (blue, light, medium, dark) and columns the hair colours (fair, red, medium, dark, black).
### Source
Fisher, R.A. (1940) The precision of discriminant functions. *Annals of Eugenics (London)* **10**, 422–429.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
corresp(caith)
dimnames(caith)[[2]] <- c("F", "R", "M", "D", "B")
par(mfcol=c(1,3))
plot(corresp(caith, nf=2)); title("symmetric")
plot(corresp(caith, nf=2), type="rows"); title("rows")
plot(corresp(caith, nf=2), type="col"); title("columns")
par(mfrow=c(1,1))
```
r None
`ships` Ships Damage Data
--------------------------
### Description
Data frame giving the number of damage incidents and aggregate months of service by ship type, year of construction, and period of operation.
### Usage
```
ships
```
### Format
`type`
type: `"A"` to `"E"`.
`year`
year of construction: 1960–64, 65–69, 70–74, 75–79 (coded as `"60"`, `"65"`, `"70"`, `"75"`).
`period`
period of operation : 1960–74, 75–79.
`service`
aggregate months of service.
`incidents`
number of damage incidents.
### Source
P. McCullagh and J. A. Nelder, (1983), *Generalized Linear Models.* Chapman & Hall, section 6.3.2, page 137
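### Examples
A hedged sketch, not from the original page: a Poisson log-linear model for incident counts with months of service as an offset, dropping rows with no service; `ships.glm` is an arbitrary name.
```
ships.glm <- glm(incidents ~ type + factor(year) + factor(period) + offset(log(service)),
                 family = poisson, data = ships, subset = service > 0)
summary(ships.glm)
```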
r None
`quine` Absenteeism from School in Rural New South Wales
---------------------------------------------------------
### Description
The `quine` data frame has 146 rows and 5 columns. Children from Walgett, New South Wales, Australia, were classified by Culture, Age, Sex and Learner status and the number of days absent from school in a particular school year was recorded.
### Usage
```
quine
```
### Format
This data frame contains the following columns:
`Eth`
ethnic background: Aboriginal or Not, (`"A"` or `"N"`).
`Sex`
sex: factor with levels (`"F"` or `"M"`).
`Age`
age group: Primary (`"F0"`), or forms `"F1,"` `"F2"` or `"F3"`.
`Lrn`
learner status: factor with levels Average or Slow learner, (`"AL"` or `"SL"`).
`Days`
days absent from school in the year.
### Source
S. Quine, quoted in Aitkin, M. (1978) The analysis of unbalanced cross classifications (with discussion). *Journal of the Royal Statistical Society series A* **141**, 195–223.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
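### Examples
A hedged sketch, not from the original page: a main-effects negative binomial model for days absent (`glm.nb` is in MASS); `quine.nb` is an arbitrary name.
```
quine.nb <- glm.nb(Days ~ Eth + Sex + Age + Lrn, data = quine)
summary(quine.nb)
```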
r None
`plot.mca` Plot Method for Objects of Class 'mca'
--------------------------------------------------
### Description
Plot a multiple correspondence analysis.
### Usage
```
## S3 method for class 'mca'
plot(x, rows = TRUE, col, cex = par("cex"), ...)
```
### Arguments
| | |
| --- | --- |
| `x` | An object of class `"mca"`. |
| `rows` | Should the coordinates for the rows be plotted, or just the vertices for the levels? |
| `col, cex` | The colours and `cex` to be used for the row points and level vertices respectively. |
| `...` | Additional parameters to `plot`. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<mca>`, `<predict.mca>`
### Examples
```
plot(mca(farms, abbrev = TRUE))
```
r None
`confint` Confidence Intervals for Model Parameters
----------------------------------------------------
### Description
Computes confidence intervals for one or more parameters in a fitted model. Package MASS adds methods for `glm` and `nls` fits.
### Usage
```
## S3 method for class 'glm'
confint(object, parm, level = 0.95, trace = FALSE, ...)
## S3 method for class 'nls'
confint(object, parm, level = 0.95, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted model object. Methods currently exist for the classes `"glm"`, `"nls"` and for profile objects from these classes. |
| `parm` | a specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered. |
| `level` | the confidence level required. |
| `trace` | logical. Should profiling be traced? |
| `...` | additional argument(s) for methods. |
### Details
`[confint](../../stats/html/confint)` is a generic function in package `stats`.
These `confint` methods call the appropriate profile method, then find the confidence intervals by interpolation in the profile traces. If the profile object is already available it should be used as the main argument rather than the fitted model object itself.
### Value
A matrix (or vector) with columns giving lower and upper confidence limits for each parameter. These will be labelled as (1 - level)/2 and 1 - (1 - level)/2 in % (by default 2.5% and 97.5%).
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[confint](../../stats/html/confint)` (the generic and `"lm"` method), `[profile](../../stats/html/profile)`
### Examples
```
expn1 <- deriv(y ~ b0 + b1 * 2^(-x/th), c("b0", "b1", "th"),
function(b0, b1, th, x) {})
wtloss.gr <- nls(Weight ~ expn1(b0, b1, th, Days),
data = wtloss, start = c(b0=90, b1=95, th=120))
expn2 <- deriv(~b0 + b1*((w0 - b0)/b1)^(x/d0),
c("b0","b1","d0"), function(b0, b1, d0, x, w0) {})
wtloss.init <- function(obj, w0) {
p <- coef(obj)
d0 <- - log((w0 - p["b0"])/p["b1"])/log(2) * p["th"]
c(p[c("b0", "b1")], d0 = as.vector(d0))
}
out <- NULL
w0s <- c(110, 100, 90)
for(w0 in w0s) {
fm <- nls(Weight ~ expn2(b0, b1, d0, Days, w0),
wtloss, start = wtloss.init(wtloss.gr, w0))
out <- rbind(out, c(coef(fm)["d0"], confint(fm, "d0")))
}
dimnames(out) <- list(paste(w0s, "kg:"), c("d0", "low", "high"))
out
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20 - numdead)
budworm.lg0 <- glm(SF ~ sex + ldose - 1, family = binomial)
confint(budworm.lg0)
confint(budworm.lg0, "ldose")
```
| programming_docs |
r None
`Animals` Brain and Body Weights for 28 Species
------------------------------------------------
### Description
Average brain and body weights for 28 species of land animals.
### Usage
```
Animals
```
### Format
`body`
body weight in kg.
`brain`
brain weight in g.
### Note
The name `Animals` avoids conflicts with a system dataset `animals` in S-PLUS 4.5 and later.
### Source
P. J. Rousseeuw and A. M. Leroy (1987) *Robust Regression and Outlier Detection.* Wiley, p. 57.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
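### Examples
A hedged sketch, not from the original page: brain weight against body weight on log-log scales.
```
plot(brain ~ body, data = Animals, log = "xy",
     xlab = "body weight (kg)", ylab = "brain weight (g)")
```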
r None
`drivers` Deaths of Car Drivers in Great Britain 1969-84
---------------------------------------------------------
### Description
A regular time series giving the monthly totals of car drivers in Great Britain killed or seriously injured from Jan 1969 to Dec 1984. Compulsory wearing of seat belts was introduced on 31 Jan 1983.
### Usage
```
drivers
```
### Source
Harvey, A.C. (1989) *Forecasting, Structural Time Series Models and the Kalman Filter.* Cambridge University Press, pp. 519–523.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
r None
`phones` Belgium Phone Calls 1950-1973
---------------------------------------
### Description
A list object with the annual numbers of telephone calls made in Belgium. The components are:
`year`
last two digits of the year.
`calls`
number of telephone calls made (in millions of calls).
### Usage
```
phones
```
### Source
P. J. Rousseeuw and A. M. Leroy (1987) *Robust Regression & Outlier Detection.* Wiley.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
r None
`petrol` N. L. Prater's Petrol Refinery Data
---------------------------------------------
### Description
The yield of a petroleum refining process with four covariates. The crude oil appears to come from only 10 distinct samples.
These data were originally used by Prater (1956) to build an estimation equation for the yield of the refining process of crude oil to gasoline.
### Usage
```
petrol
```
### Format
The variables are as follows
`No`
crude oil sample identification label. (Factor.)
`SG`
specific gravity, degrees API. (Constant within sample.)
`VP`
vapour pressure in pounds per square inch. (Constant within sample.)
`V10`
volatility of crude; ASTM 10% point. (Constant within sample.)
`EP`
desired volatility of gasoline. (The end point. Varies within sample.)
`Y`
yield as a percentage of crude.
### Source
N. H. Prater (1956) Estimate gasoline yields from crudes. *Petroleum Refiner* **35**, 236–238.
This dataset is also given in D. J. Hand, F. Daly, K. McConway, D. Lunn and E. Ostrowski (eds) (1994) *A Handbook of Small Data Sets.* Chapman & Hall.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
library(nlme)
Petrol <- petrol
Petrol[, 2:5] <- scale(as.matrix(Petrol[, 2:5]), scale = FALSE)
pet3.lme <- lme(Y ~ SG + VP + V10 + EP,
random = ~ 1 | No, data = Petrol)
pet3.lme <- update(pet3.lme, method = "ML")
pet4.lme <- update(pet3.lme, fixed = Y ~ V10 + EP)
anova(pet4.lme, pet3.lme)
```
r None
`anorexia` Anorexia Data on Weight Change
------------------------------------------
### Description
The `anorexia` data frame has 72 rows and 3 columns. Weight change data for young female anorexia patients.
### Usage
```
anorexia
```
### Format
This data frame contains the following columns:
`Treat`
Factor of three levels: `"Cont"` (control), `"CBT"` (Cognitive Behavioural treatment) and `"FT"` (family treatment).
`Prewt`
Weight of patient before study period, in lbs.
`Postwt`
Weight of patient after study period, in lbs.
### Source
Hand, D. J., Daly, F., McConway, K., Lunn, D. and Ostrowski, E. eds (1993) *A Handbook of Small Data Sets.* Chapman & Hall, Data set 285 (p. 229)
(Note that the original source mistakenly says that weights are in kg.)
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
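### Examples
A hedged sketch, not from the original page: model post-treatment weight adjusting for pre-treatment weight; `anorex.lm` is an arbitrary name.
```
anorex.lm <- lm(Postwt ~ Prewt + Treat, data = anorexia)
summary(anorex.lm)
```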
r None
`polr` Ordered Logistic or Probit Regression
---------------------------------------------
### Description
Fits a logistic or probit regression model to an ordered factor response. The default logistic case is *proportional odds logistic regression*, after which the function is named.
### Usage
```
polr(formula, data, weights, start, ..., subset, na.action,
contrasts = NULL, Hess = FALSE, model = TRUE,
method = c("logistic", "probit", "loglog", "cloglog", "cauchit"))
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula expression as for regression models, of the form `response ~ predictors`. The response should be a factor (preferably an ordered factor), which will be interpreted as an ordinal response, with levels ordered as in the factor. The model must have an intercept: attempts to remove one will lead to a warning and be ignored. An offset may be used. See the documentation of `[formula](../../stats/html/formula)` for other details. |
| `data` | an optional data frame, list or environment in which to interpret the variables occurring in `formula`. |
| `weights` | optional case weights in fitting. Default to 1. |
| `start` | initial values for the parameters. This is in the format `c(coefficients, zeta)`: see the Values section. |
| `...` | additional arguments to be passed to `[optim](../../stats/html/optim)`, most often a `control` argument. |
| `subset` | expression saying which subset of the rows of the data should be used in the fit. All observations are included by default. |
| `na.action` | a function to filter missing data. |
| `contrasts` | a list of contrasts to be used for some or all of the factors appearing as variables in the model formula. |
| `Hess` | logical for whether the Hessian (the observed information matrix) should be returned. Use this if you intend to call `summary` or `vcov` on the fit. |
| `model` | logical for whether the model matrix should be returned. |
| `method` | logistic or probit or (complementary) log-log or cauchit (corresponding to a Cauchy latent variable). |
### Details
This model is what Agresti (2002) calls a *cumulative link* model. The basic interpretation is as a *coarsened* version of a latent variable *Y_i* which has a logistic or normal or extreme-value or Cauchy distribution with scale parameter one and a linear model for the mean. The ordered factor which is observed is which bin *Y_i* falls into, with breakpoints
*zeta_0 = -Inf < zeta_1 < … < zeta_K = Inf*
This leads to the model
*logit P(Y <= k | x) = zeta_k - eta*
with *logit* replaced by *probit* for a normal latent variable, and *eta* being the linear predictor, a linear function of the explanatory variables (with no intercept). Note that it is quite common for other software to use the opposite sign for *eta* (and hence the coefficients `beta`).
In the logistic case, the left-hand side of the last display is the log odds of category *k* or less, and since these are log odds which differ only by a constant for different *k*, the odds are proportional. Hence the term *proportional odds logistic regression*.
The log-log and complementary log-log links are the increasing functions *F^-1(p) = -log(-log(p))* and *F^-1(p) = log(-log(1-p))*; some call the first the ‘negative log-log’ link. These correspond to a latent variable with the extreme-value distribution for the maximum and minimum respectively.
A *proportional hazards* model for grouped survival times can be obtained by using the complementary log-log link with grouping ordered by increasing times.
There are methods for the standard model-fitting functions, including `[predict](../../stats/html/predict)`, `[summary](../../base/html/summary)`, `[vcov](../../stats/html/vcov)`, `[anova](../../stats/html/anova)`, `[model.frame](../../stats/html/model.frame)` and an `extractAIC` method for use with `[stepAIC](stepaic)` (and `[step](../../stats/html/step)`). There are also `[profile](../../stats/html/profile)` and `[confint](../../stats/html/confint)` methods.
### Value
A object of class `"polr"`. This has components
| | |
| --- | --- |
| `coefficients` | the coefficients of the linear predictor, which has no intercept. |
| `zeta` | the intercepts for the class boundaries. |
| `deviance` | the residual deviance. |
| `fitted.values` | a matrix, with a column for each level of the response. |
| `lev` | the names of the response levels. |
| `terms` | the `terms` structure describing the model. |
| `df.residual` | the number of residual degrees of freedom, calculated using the weights. |
| `edf` | the (effective) number of degrees of freedom used by the model |
| `n, nobs` | the (effective) number of observations, calculated using the weights. (`nobs` is for use by `[stepAIC](stepaic)`.) |
| `call` | the matched call. |
| `method` | the matched method used. |
| `convergence` | the convergence code returned by `optim`. |
| `niter` | the number of function and gradient evaluations used by `optim`. |
| `lp` | the linear predictor (including any offset). |
| `Hessian` | (if `Hess` is true). Note that this is a numerical approximation derived from the optimization process. |
| `model` | (if `model` is true). |
### Note
The `[vcov](../../stats/html/vcov)` method uses the approximate Hessian: for reliable results the model matrix should be sensibly scaled with all columns having range the order of one.
Prior to version 7.3-32, `method = "cloglog"` confusingly gave the log-log link, implicitly assuming the first response level was the ‘best’.
### References
Agresti, A. (2002) *Categorical Data.* Second edition. Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[optim](../../stats/html/optim)`, `[glm](../../stats/html/glm)`, `[multinom](../../nnet/html/multinom)`.
### Examples
```
options(contrasts = c("contr.treatment", "contr.poly"))
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
house.plr
summary(house.plr, digits = 3)
## slightly worse fit from
summary(update(house.plr, method = "probit", Hess = TRUE), digits = 3)
## although it is not really appropriate, can fit
summary(update(house.plr, method = "loglog", Hess = TRUE), digits = 3)
summary(update(house.plr, method = "cloglog", Hess = TRUE), digits = 3)
predict(house.plr, housing, type = "p")
addterm(house.plr, ~.^2, test = "Chisq")
house.plr2 <- stepAIC(house.plr, ~.^2)
house.plr2$anova
anova(house.plr, house.plr2)
house.plr <- update(house.plr, Hess=TRUE)
pr <- profile(house.plr)
confint(pr)
plot(pr)
pairs(pr)
```
r None
`oats` Data from an Oats Field Trial
-------------------------------------
### Description
The yield of oats from a split-plot field trial using three varieties and four levels of manurial treatment. The experiment was laid out in 6 blocks of 3 main plots, each split into 4 sub-plots. The varieties were applied to the main plots and the manurial treatments to the sub-plots.
### Usage
```
oats
```
### Format
This data frame contains the following columns:
`B`
Blocks, levels I, II, III, IV, V and VI.
`V`
Varieties, 3 levels.
`N`
Nitrogen (manurial) treatment, levels 0.0cwt, 0.2cwt, 0.4cwt and 0.6cwt, showing the application in cwt/acre.
`Y`
Yields in 1/4lbs per sub-plot, each of area 1/80 acre.
### Source
Yates, F. (1935) Complex experiments, *Journal of the Royal Statistical Society Suppl.* **2**, 181–247.
Also given in Yates, F. (1970) *Experimental design: Selected papers of Frank Yates, C.B.E, F.R.S.* London: Griffin.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
oats$Nf <- ordered(oats$N, levels = sort(levels(oats$N)))
oats.aov <- aov(Y ~ Nf*V + Error(B/V), data = oats, qr = TRUE)
## IGNORE_RDIFF_BEGIN
summary(oats.aov)
summary(oats.aov, split = list(Nf=list(L=1, Dev=2:3)))
## IGNORE_RDIFF_END
par(mfrow = c(1,2), pty = "s")
plot(fitted(oats.aov[[4]]), studres(oats.aov[[4]]))
abline(h = 0, lty = 2)
oats.pr <- proj(oats.aov)
qqnorm(oats.pr[[4]][,"Residuals"], ylab = "Stratum 4 residuals")
qqline(oats.pr[[4]][,"Residuals"])
par(mfrow = c(1,1), pty = "m")
oats.aov2 <- aov(Y ~ N + V + Error(B/V), data = oats, qr = TRUE)
model.tables(oats.aov2, type = "means", se = TRUE)
```
r None
`UScrime` The Effect of Punishment Regimes on Crime Rates
----------------------------------------------------------
### Description
Criminologists are interested in the effect of punishment regimes on crime rates. This has been studied using aggregate data on 47 states of the USA for 1960 given in this data frame. The variables seem to have been re-scaled to convenient numbers.
### Usage
```
UScrime
```
### Format
This data frame contains the following columns:
`M`
percentage of males aged 14–24.
`So`
indicator variable for a Southern state.
`Ed`
mean years of schooling.
`Po1`
police expenditure in 1960.
`Po2`
police expenditure in 1959.
`LF`
labour force participation rate.
`M.F`
number of males per 1000 females.
`Pop`
state population.
`NW`
number of non-whites per 1000 people.
`U1`
unemployment rate of urban males 14–24.
`U2`
unemployment rate of urban males 35–39.
`GDP`
gross domestic product per head.
`Ineq`
income inequality.
`Prob`
probability of imprisonment.
`Time`
average time served in state prisons.
`y`
rate of crimes in a particular category per head of population.
### Source
Ehrlich, I. (1973) Participation in illegitimate activities: a theoretical and empirical investigation. *Journal of Political Economy*, **81**, 521–565.
Vandaele, W. (1978) Participation in illegitimate activities: Ehrlich revisited. In *Deterrence and Incapacitation*, eds A. Blumstein, J. Cohen and D. Nagin, pp. 270–335. US National Academy of Sciences.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
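### Examples
The original page gives no examples; the following is a small sketch (not from the help page) that fits a linear model for the crime rate and simplifies it by AIC using `stepAIC` from this package.
```
## a minimal sketch: regress the crime rate on all other variables,
## then perform backwards selection by AIC
UScrime.lm <- lm(y ~ ., data = UScrime)
UScrime.step <- stepAIC(UScrime.lm, trace = FALSE)
summary(UScrime.step)
```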
`ldahist` Histograms or Density Plots of Multiple Groups
---------------------------------------------------------
### Description
Plot histograms or density plots of data on a single Fisher linear discriminant.
### Usage
```
ldahist(data, g, nbins = 25, h, x0 = - h/1000, breaks,
xlim = range(breaks), ymax = 0, width,
type = c("histogram", "density", "both"),
sep = (type != "density"),
col = 5, xlab = deparse(substitute(data)), bty = "n", ...)
```
### Arguments
| | |
| --- | --- |
| `data` | vector of data. Missing values (`NA`s) are allowed and omitted. |
| `g` | factor or vector giving groups, of the same length as `data`. |
| `nbins` | Suggested number of bins to cover the whole range of the data. |
| `h` | The bin width (takes precedence over `nbins`). |
| `x0` | Shift for the bins - the breaks are at `x0 + h * (..., -1, 0, 1, ...)` |
| `breaks` | The set of breakpoints to be used. (Usually omitted, takes precedence over `h` and `nbins`). |
| `xlim` | The limits for the x-axis. |
| `ymax` | The upper limit for the y-axis. |
| `width` | Bandwidth for density estimates. If missing, the Sheather-Jones selector is used for each group separately. |
| `type` | Type of plot. |
| `sep` | Whether there is a separate plot for each group, or one combined plot. |
| `col` | The colour number for the bar fill. |
| `xlab` | label for the plot x-axis. By default, this will be the name of `data`. |
| `bty` | The box type for the plot - defaults to none. |
| `...` | additional arguments to `polygon`. |
### Side Effects
Histogram and/or density plots are plotted on the current device.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<plot.lda>`.
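### Examples
A small sketch, not from the original help page: project the iris data onto the first linear discriminant and plot group-wise histograms.
```
## fit an LDA to the three iris species, then plot histograms of the
## scores on the first linear discriminant, one panel per group
ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
cl <- factor(rep(c("s", "c", "v"), each = 50))
ir.lda <- lda(ir, cl)
ldahist(predict(ir.lda)$x[, 1], g = cl)
```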
`Skye` AFM Compositions of Aphyric Skye Lavas
----------------------------------------------
### Description
The `Skye` data frame has 23 rows and 3 columns.
### Usage
```
Skye
```
### Format
This data frame contains the following columns:
`A`
Percentage of sodium and potassium oxides.
`F`
Percentage of iron oxide.
`M`
Percentage of magnesium oxide.
### Source
R. N. Thompson, J. Esson and A. C. Duncan (1972) Major element chemical variation in the Eocene lavas of the Isle of Skye. *J. Petrology*, **13**, 219–253.
### References
J. Aitchison (1986) *The Statistical Analysis of Compositional Data.* Chapman and Hall, p.360.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
# ternary() is from the on-line answers.
ternary <- function(X, pch = par("pch"), lcex = 1,
add = FALSE, ord = 1:3, ...)
{
X <- as.matrix(X)
if(any(X < 0)) stop("X must be non-negative")
s <- drop(X %*% rep(1, ncol(X)))
if(any(s<=0)) stop("each row of X must have a positive sum")
if(max(abs(s-1)) > 1e-6) {
warning("row(s) of X will be rescaled")
X <- X / s
}
X <- X[, ord]
s3 <- sqrt(1/3)
if(!add)
{
oldpty <- par("pty")
on.exit(par(pty=oldpty))
par(pty="s")
plot(c(-s3, s3), c(0.5-s3, 0.5+s3), type="n", axes=FALSE,
xlab="", ylab="")
polygon(c(0, -s3, s3), c(1, 0, 0), density=0)
lab <- NULL
if(!is.null(dn <- dimnames(X))) lab <- dn[[2]]
if(length(lab) < 3) lab <- as.character(1:3)
eps <- 0.05 * lcex
text(c(0, s3+eps*0.7, -s3-eps*0.7),
c(1+eps, -0.1*eps, -0.1*eps), lab, cex=lcex)
}
points((X[,2] - X[,3])*s3, X[,1], ...)
}
ternary(Skye/100, ord=c(1,3,2))
```
`cement` Heat Evolved by Setting Cements
-----------------------------------------
### Description
Experiment on the heat evolved in the setting of each of 13 cements.
### Usage
```
cement
```
### Format
`x1, x2, x3, x4`
Proportions (%) of active ingredients.
`y`
heat evolved in cals/gm.
### Details
Thirteen samples of Portland cement were set. For each sample, the percentages of the four main chemical ingredients were accurately measured. While the cement was setting the amount of heat evolved was also measured.
### Source
Woods, H., Steinour, H.H. and Starke, H.R. (1932) Effect of composition of Portland cement on heat evolved during hardening. *Industrial Engineering and Chemistry*, **24**, 1207–1214.
### References
Hald, A. (1957) *Statistical Theory with Engineering Applications.* Wiley, New York.
### Examples
```
lm(y ~ x1 + x2 + x3 + x4, cement)
```
`glmmPQL` Fit Generalized Linear Mixed Models via PQL
------------------------------------------------------
### Description
Fit a GLMM model with multivariate normal random effects, using Penalized Quasi-Likelihood.
### Usage
```
glmmPQL(fixed, random, family, data, correlation, weights,
control, niter = 10, verbose = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `fixed` | a two-sided linear formula giving fixed-effects part of the model. |
| `random` | a formula or list of formulae describing the random effects. |
| `family` | a GLM family. |
| `data` | an optional data frame, list or environment used as the first place to find variables in the formulae, `weights` and if present in `...`, `subset`. |
| `correlation` | an optional correlation structure. |
| `weights` | optional case weights as in `glm`. |
| `control` | an optional argument to be passed to `lme`. |
| `niter` | maximum number of iterations. |
| `verbose` | logical: print out record of iterations? |
| `...` | Further arguments for `lme`. |
### Details
`glmmPQL` works by repeated calls to `[lme](../../nlme/html/lme)`, so package `nlme` will be loaded at first use if necessary.
### Value
An object of class `"lme"`: see `[lmeObject](../../nlme/html/lmeobject)`.
### References
Schall, R. (1991) Estimation in generalized linear models with random effects. *Biometrika* **78**, 719–727.
Breslow, N. E. and Clayton, D. G. (1993) Approximate inference in generalized linear mixed models. *Journal of the American Statistical Association* **88**, 9–25.
Wolfinger, R. and O'Connell, M. (1993) Generalized linear mixed models: a pseudo-likelihood approach. *Journal of Statistical Computation and Simulation* **48**, 233–243.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[lme](../../nlme/html/lme)`
### Examples
```
library(nlme) # will be loaded automatically if omitted
summary(glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
family = binomial, data = bacteria))
```
`loglm1` Fit Log-Linear Models by Iterative Proportional Scaling – Internal function
-------------------------------------------------------------------------------------
### Description
`loglm1` is an internal function used by `<loglm>`. It is a generic function dispatching on the `data` argument.
### Usage
```
loglm1(formula, data, ...)
## S3 method for class 'xtabs'
loglm1(formula, data, ...)
## S3 method for class 'data.frame'
loglm1(formula, data, ...)
## Default S3 method:
loglm1(formula, data, start = rep(1, length(data)), fitted = FALSE,
keep.frequencies = fitted, param = TRUE, eps = 1/10,
iter = 40, print = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `formula` | A linear model formula specifying the log-linear model. See `<loglm>` for its interpretation. |
| `data` | Numeric array or data frame (or list or environment). In the first case it specifies the array of frequencies; in the second it provides the data frame from which the variables occurring in the formula are preferentially obtained in the usual way. This argument may also be the result of a call to `[xtabs](../../stats/html/xtabs)`. |
| `start, param, eps, iter, print` | Arguments passed to `[loglin](../../stats/html/loglin)`. |
| `fitted` | logical: should the fitted values be returned? |
| `keep.frequencies` | If `TRUE` specifies that the (possibly constructed) array of frequencies is to be retained as part of the fitted model object. The default action is to use the same value as that used for `fitted`. |
| `...` | arguments passed to the default method. |
### Value
An object of class `"loglm"`.
### See Also
`<loglm>`, `[loglin](../../stats/html/loglin)`
`lda` Linear Discriminant Analysis
-----------------------------------
### Description
Linear discriminant analysis.
### Usage
```
lda(x, ...)
## S3 method for class 'formula'
lda(formula, data, ..., subset, na.action)
## Default S3 method:
lda(x, grouping, prior = proportions, tol = 1.0e-4,
method, CV = FALSE, nu, ...)
## S3 method for class 'data.frame'
lda(x, ...)
## S3 method for class 'matrix'
lda(x, grouping, ..., subset, na.action)
```
### Arguments
| | |
| --- | --- |
| `formula` | A formula of the form `groups ~ x1 + x2 + ...` That is, the response is the grouping factor and the right hand side specifies the (non-factor) discriminators. |
| `data` | An optional data frame, list or environment from which variables specified in `formula` are preferentially to be taken. |
| `x` | (required if no formula is given as the principal argument.) a matrix or data frame or Matrix containing the explanatory variables. |
| `grouping` | (required if no formula principal argument is given.) a factor specifying the class for each observation. |
| `prior` | the prior probabilities of class membership. If unspecified, the class proportions for the training set are used. If present, the probabilities should be specified in the order of the factor levels. |
| `tol` | A tolerance to decide if a matrix is singular; it will reject variables and linear combinations of unit-variance variables whose variance is less than `tol^2`. |
| `subset` | An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.) |
| `na.action` | A function to specify the action to be taken if `NA`s are found. The default action is for the procedure to fail. An alternative is `na.omit`, which leads to rejection of cases with missing values on any required variable. (NOTE: If given, this argument must be named.) |
| `method` | `"moment"` for standard estimators of the mean and variance, `"mle"` for MLEs, `"mve"` to use `[cov.mve](cov.rob)`, or `"t"` for robust estimates based on a *t* distribution. |
| `CV` | If true, returns results (classes and posterior probabilities) for leave-one-out cross-validation. Note that if the prior is estimated, the proportions in the whole dataset are used. |
| `nu` | degrees of freedom for `method = "t"`. |
| `...` | arguments passed to or from other methods. |
### Details
The function tries hard to detect if the within-class covariance matrix is singular. If any variable has within-group variance less than `tol^2` it will stop and report the variable as constant. This could result from poor scaling of the problem, but is more likely to result from constant variables.
Specifying the `prior` will affect the classification unless over-ridden in `predict.lda`. Unlike in most statistical packages, it will also affect the rotation of the linear discriminants within their space, as a weighted between-groups covariance matrix is used. Thus the first few linear discriminants emphasize the differences between groups with the weights given by the prior, which may differ from their prevalence in the dataset.
If one or more groups is missing in the supplied data, they are dropped with a warning, but the classifications produced are with respect to the original set of levels.
### Value
If `CV = TRUE` the return value is a list with components `class`, the MAP classification (a factor), and `posterior`, posterior probabilities for the classes.
Otherwise it is an object of class `"lda"` containing the following components:
| | |
| --- | --- |
| `prior` | the prior probabilities used. |
| `means` | the group means. |
| `scaling` | a matrix which transforms observations to discriminant functions, normalized so that within groups covariance matrix is spherical. |
| `svd` | the singular values, which give the ratio of the between- and within-group standard deviations on the linear discriminant variables. Their squares are the canonical F-statistics. |
| `N` | The number of observations used. |
| `call` | The (matched) function call. |
### Note
This function may be called giving either a formula and optional data frame, or a matrix and grouping factor as the first two arguments. All other arguments are optional, but `subset=` and `na.action=`, if required, must be fully named.
If a formula is given as the principal argument the object may be modified using `update()` in the usual way.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks*. Cambridge University Press.
### See Also
`<predict.lda>`, `<qda>`, `<predict.qda>`
### Examples
```
Iris <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
Sp = rep(c("s","c","v"), rep(50,3)))
train <- sample(1:150, 75)
table(Iris$Sp[train])
## your answer may differ
## c s v
## 22 23 30
z <- lda(Sp ~ ., Iris, prior = c(1,1,1)/3, subset = train)
predict(z, Iris[-train, ])$class
## [1] s s s s s s s s s s s s s s s s s s s s s s s s s s s c c c
## [31] c c c c c c c v c c c c v c c c c c c c c c c c c v v v v v
## [61] v v v v v v v v v v v v v v v
(z1 <- update(z, . ~ . - Petal.W.))
```
`negative.binomial` Family function for Negative Binomial GLMs
---------------------------------------------------------------
### Description
Specifies the information required to fit a Negative Binomial generalized linear model, with known `theta` parameter, using `glm()`.
### Usage
```
negative.binomial(theta = stop("'theta' must be specified"), link = "log")
```
### Arguments
| | |
| --- | --- |
| `theta` | The known value of the additional parameter, `theta`. |
| `link` | The link function, as a character string, name or one-element character vector specifying one of `log`, `sqrt` or `identity`, or an object of class `"[link-glm](../../stats/html/family)"`. |
### Value
An object of class `"family"`, a list of functions and expressions needed by `glm()` to fit a Negative Binomial generalized linear model.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
### See Also
`<glm.nb>`, `<anova.negbin>`, `<summary.negbin>`
### Examples
```
# Fitting a Negative Binomial model to the quine data
# with theta = 2 assumed known.
#
glm(Days ~ .^4, family = negative.binomial(2), data = quine)
```
`birthwt` Risk Factors Associated with Low Infant Birth Weight
---------------------------------------------------------------
### Description
The `birthwt` data frame has 189 rows and 10 columns. The data were collected at Baystate Medical Center, Springfield, Mass during 1986.
### Usage
```
birthwt
```
### Format
This data frame contains the following columns:
`low`
indicator of birth weight less than 2.5 kg.
`age`
mother's age in years.
`lwt`
mother's weight in pounds at last menstrual period.
`race`
mother's race (`1` = white, `2` = black, `3` = other).
`smoke`
smoking status during pregnancy.
`ptl`
number of previous premature labours.
`ht`
history of hypertension.
`ui`
presence of uterine irritability.
`ftv`
number of physician visits during the first trimester.
`bwt`
birth weight in grams.
### Source
Hosmer, D.W. and Lemeshow, S. (1989) *Applied Logistic Regression.* New York: Wiley
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
bwt <- with(birthwt, {
race <- factor(race, labels = c("white", "black", "other"))
ptd <- factor(ptl > 0)
ftv <- factor(ftv)
levels(ftv)[-(1:2)] <- "2+"
data.frame(low = factor(low), age, lwt, race, smoke = (smoke > 0),
ptd, ht = (ht > 0), ui = (ui > 0), ftv)
})
options(contrasts = c("contr.treatment", "contr.poly"))
glm(low ~ ., binomial, bwt)
```
`shrimp` Percentage of Shrimp in Shrimp Cocktail
-------------------------------------------------
### Description
A numeric vector with 18 determinations by different laboratories of the amount (percentage of the declared total weight) of shrimp in shrimp cocktail.
### Usage
```
shrimp
```
### Source
F. J. King and J. J. Ryan (1976) Collaborative study of the determination of the amount of shrimp in shrimp cocktail. *J. Off. Anal. Chem.* **59**, 644–649.
R. G. Staudte and S. J. Sheather (1990) *Robust Estimation and Testing.* Wiley.
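### Examples
A brief sketch, not from the original help page, summarizing the determinations and giving a robust estimate of location with `huber` from this package.
```
summary(shrimp)
huber(shrimp)   # Huber M-estimate of location with MAD scale
```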
`gamma.dispersion` Calculate the MLE of the Gamma Dispersion Parameter in a GLM Fit
------------------------------------------------------------------------------------
### Description
A front end to `gamma.shape` for convenience. Finds the reciprocal of the estimate of the shape parameter only.
### Usage
```
gamma.dispersion(object, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | Fitted model object giving the gamma fit. |
| `...` | Additional arguments passed on to `gamma.shape`. |
### Value
The MLE of the dispersion parameter of the gamma distribution.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<gamma.shape.glm>`, including the example on its help page.
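### Examples
A minimal sketch, not from the original help page; the small clotting data set is reconstructed inline from the example in `?glm`.
```
## fit a Gamma GLM, then extract the MLE of the shape parameter and
## its reciprocal, the dispersion
clotting <- data.frame(
  u   = c(5, 10, 15, 20, 30, 40, 60, 80, 100),
  lot = c(118, 58, 42, 35, 27, 25, 21, 19, 18))
clot.glm <- glm(lot ~ log(u), data = clotting, family = Gamma)
gamma.shape(clot.glm)        # estimated shape and its standard error
gamma.dispersion(clot.glm)   # reciprocal of the shape estimate
```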
`Rubber` Accelerated Testing of Tyre Rubber
--------------------------------------------
### Description
Data frame from accelerated testing of tyre rubber.
### Usage
```
Rubber
```
### Format
`loss`
the abrasion loss in gm/hr.
`hard`
the hardness in Shore units.
`tens`
tensile strength in kg/sq m.
### Source
O.L. Davies (1947) *Statistical Methods in Research and Production.* Oliver and Boyd, Table 6.1 p. 119.
O.L. Davies and P.L. Goldsmith (1972) *Statistical Methods in Research and Production.* 4th edition, Longmans, Table 8.1 p. 239.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
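### Examples
A brief sketch, not from the original help page: abrasion loss modelled on hardness and tensile strength.
```
summary(lm(loss ~ hard + tens, data = Rubber))
```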
`menarche` Age of Menarche in Warsaw
-------------------------------------
### Description
Proportions of female children at various ages during adolescence who have reached menarche.
### Usage
```
menarche
```
### Format
This data frame contains the following columns:
`Age`
Average age of the group. (The groups are reasonably age homogeneous.)
`Total`
Total number of children in the group.
`Menarche`
Number who have reached menarche.
### Source
Milicer, H. and Szczotka, F. (1966) Age at Menarche in Warsaw girls in 1965. *Human Biology* **38**, 199–203.
The data are also given in
Aranda-Ordaz, F.J. (1981) On two families of transformations to additivity for binary response data. *Biometrika* **68**, 357–363.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
mprob <- glm(cbind(Menarche, Total - Menarche) ~ Age,
binomial(link = probit), data = menarche)
```
`synth.tr` Synthetic Classification Problem
--------------------------------------------
### Description
The `synth.tr` data frame has 250 rows and 3 columns. The `synth.te` data frame has 1000 rows and 3 columns. It is intended that `synth.tr` be used for training and `synth.te` for testing.
### Usage
```
synth.tr
synth.te
```
### Format
These data frames contain the following columns:
`xs`
x-coordinate
`ys`
y-coordinate
`yc`
class, coded as 0 or 1.
### Source
Ripley, B.D. (1994) Neural networks and related methods for classification (with discussion). *Journal of the Royal Statistical Society series B* **56**, 409–456.
Ripley, B.D. (1996) *Pattern Recognition and Neural Networks.* Cambridge: Cambridge University Press.
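### Examples
A brief sketch, not from the original help page: plot the training set with colour and plotting symbol determined by class.
```
plot(ys ~ xs, data = synth.tr, col = yc + 1, pch = yc + 1)
```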
`cats` Anatomical Data from Domestic Cats
------------------------------------------
### Description
The heart and body weights of samples of male and female cats used for *digitalis* experiments. The cats were all adult, over 2 kg body weight.
### Usage
```
cats
```
### Format
This data frame contains the following columns:
`Sex`
sex: Factor with levels `"F"` and `"M"`.
`Bwt`
body weight in kg.
`Hwt`
heart weight in g.
### Source
R. A. Fisher (1947) The analysis of covariance method for the relation between a part and the whole, *Biometrics* **3**, 65–68.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
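### Examples
A brief sketch, not from the original help page: heart weight against body weight, allowing separate slopes for the two sexes.
```
summary(lm(Hwt ~ Bwt * Sex, data = cats))
```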
`cabbages` Data from a cabbage field trial
-------------------------------------------
### Description
The `cabbages` data set has 60 observations and 4 variables
### Usage
```
cabbages
```
### Format
This data frame contains the following columns:
`Cult`
Factor giving the cultivar of the cabbage, two levels: `c39` and `c52`.
`Date`
Factor specifying one of three planting dates: `d16`, `d20` or `d21`.
`HeadWt`
Weight of the cabbage head, presumably in kg.
`VitC`
Ascorbic acid content, in undefined units.
### Source
Rawlings, J. O. (1988) *Applied Regression Analysis: A Research Tool.* Wadsworth and Brooks/Cole. Example 8.4, page 219. (Rawlings cites the original source as the files of the late Dr Gertrude M Cox.)
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
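### Examples
A brief sketch, not from the original help page: ascorbic acid content by cultivar and planting date.
```
summary(aov(VitC ~ Cult + Date, data = cabbages))
```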
`fractions` Rational Approximation
-----------------------------------
### Description
Find rational approximations to the components of a real numeric object using a standard continued fraction method.
### Usage
```
fractions(x, cycles = 10, max.denominator = 2000, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | Any object of mode numeric. Missing values are now allowed. |
| `cycles` | The maximum number of steps to be used in the continued fraction approximation process. |
| `max.denominator` | An early termination criterion. If any partial denominator exceeds `max.denominator` the continued fraction stops at that point. |
| `...` | arguments passed to or from other methods. |
### Details
Each component is first expanded in a continued fraction of the form
`x = floor(x) + 1/(p1 + 1/(p2 + ...))`
where `p1`, `p2`, ... are positive integers, terminating either at `cycles` terms or when a `pj > max.denominator`. The continued fraction is then re-arranged to retrieve the numerator and denominator as integers.
The numerators and denominators are then combined into a character vector that becomes the `"fracs"` attribute and is used in printed representations.
Arithmetic operations on `"fractions"` objects have full floating point accuracy, but the character representation printed out may not.
### Value
An object of class `"fractions"`. A structure with `.Data` component the same as the input numeric `x`, but with the rational approximations held as a character vector attribute, `"fracs"`. Arithmetic operations on `"fractions"` objects are possible.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth Edition. Springer.
### See Also
`<rational>`
### Examples
```
X <- matrix(runif(25), 5, 5)
zapsmall(solve(X, X/5)) # print near-zeroes as zero
fractions(solve(X, X/5))
fractions(solve(X, X/5)) + 1
```
`plot.lda` Plot Method for Class 'lda'
---------------------------------------
### Description
Plots a set of data on one, two or more linear discriminants.
### Usage
```
## S3 method for class 'lda'
plot(x, panel = panel.lda, ..., cex = 0.7, dimen,
abbrev = FALSE, xlab = "LD1", ylab = "LD2")
```
### Arguments
| | |
| --- | --- |
| `x` | An object of class `"lda"`. |
| `panel` | the panel function used to plot the data. |
| `...` | additional arguments to `pairs`, `ldahist` or `eqscplot`. |
| `cex` | graphics parameter `cex` for labels on plots. |
| `dimen` | The number of linear discriminants to be used for the plot; if this exceeds the number determined by `x` the smaller value is used. |
| `abbrev` | whether the group labels are abbreviated on the plots. If `abbrev > 0` this gives `minlength` in the call to `abbreviate`. |
| `xlab` | label for the x axis |
| `ylab` | label for the y axis |
### Details
This function is a method for the generic function `plot()` for class `"lda"`. It can be invoked by calling `plot(x)` for an object `x` of the appropriate class, or directly by calling `plot.lda(x)` regardless of the class of the object.
The behaviour is determined by the value of `dimen`. For `dimen > 2`, a `pairs` plot is used. For `dimen = 2`, an equiscaled scatter plot is drawn. For `dimen = 1`, a set of histograms or density plots are drawn. Use argument `type` to match `"histogram"` or `"density"` or `"both"`.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<pairs.lda>`, `<ldahist>`, `<lda>`, `<predict.lda>`
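### Examples
A small sketch, not from the original help page, using the iris data in the same form as the examples for `lda`.
```
Iris <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
                   Sp = rep(c("s", "c", "v"), rep(50, 3)))
iris.lda <- lda(Sp ~ ., Iris)
plot(iris.lda)             # scatter plot on the two linear discriminants
plot(iris.lda, dimen = 1)  # histograms on the first discriminant only
```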
`whiteside` House Insulation: Whiteside's Data
-----------------------------------------------
### Description
Mr Derek Whiteside of the UK Building Research Station recorded the weekly gas consumption and average external temperature at his own house in south-east England for two heating seasons, one of 26 weeks before, and one of 30 weeks after cavity-wall insulation was installed. The object of the exercise was to assess the effect of the insulation on gas consumption.
### Usage
```
whiteside
```
### Format
The `whiteside` data frame has 56 rows and 3 columns:
`Insul`
A factor, before or after insulation.
`Temp`
Purportedly the average outside temperature in degrees Celsius. (These values are far too low for any 56-week period in the 1960s in South-East England. It might be the weekly average of daily minima.)
`Gas`
The weekly gas consumption in 1000s of cubic feet.
### Source
A data set collected in the 1960s by Mr Derek Whiteside of the UK Building Research Station. Reported by
Hand, D. J., Daly, F., McConway, K., Lunn, D. and Ostrowski, E. eds (1993) *A Handbook of Small Data Sets.* Chapman & Hall, p. 69.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
require(lattice)
xyplot(Gas ~ Temp | Insul, whiteside, panel =
function(x, y, ...) {
panel.xyplot(x, y, ...)
panel.lmline(x, y, ...)
}, xlab = "Average external temperature (deg. C)",
ylab = "Gas consumption (1000 cubic feet)", aspect = "xy",
strip = function(...) strip.default(..., style = 1))
gasB <- lm(Gas ~ Temp, whiteside, subset = Insul=="Before")
gasA <- update(gasB, subset = Insul=="After")
summary(gasB)
summary(gasA)
gasBA <- lm(Gas ~ Insul/Temp - 1, whiteside)
summary(gasBA)
gasQ <- lm(Gas ~ Insul/(Temp + I(Temp^2)) - 1, whiteside)
coef(summary(gasQ))
gasPR <- lm(Gas ~ Insul + Temp, whiteside)
anova(gasPR, gasBA)
options(contrasts = c("contr.treatment", "contr.poly"))
gasBA1 <- lm(Gas ~ Insul*Temp, whiteside)
coef(summary(gasBA1))
```
| programming_docs |
`Sitka89` Growth Curves for Sitka Spruce Trees in 1989
-------------------------------------------------------
### Description
The `Sitka89` data frame has 632 rows and 4 columns. It gives repeated measurements on the log-size of 79 Sitka spruce trees, 54 of which were grown in ozone-enriched chambers and 25 were controls. The size was measured eight times in 1989, at roughly monthly intervals.
### Usage
```
Sitka89
```
### Format
This data frame contains the following columns:
`size`
measured size (height times diameter squared) of tree, on log scale.
`Time`
time of measurement in days since 1 January 1988.
`tree`
number of tree.
`treat`
either `"ozone"` for an ozone-enriched chamber or `"control"`.
### Source
P. J. Diggle, K.-Y. Liang and S. L. Zeger (1994) *Analysis of Longitudinal Data.* Clarendon Press, Oxford
### See Also
`[Sitka](sitka)`
`predict.glmmPQL` Predict Method for glmmPQL Fits
--------------------------------------------------
### Description
Obtains predictions from a fitted generalized linear model with random effects.
### Usage
```
## S3 method for class 'glmmPQL'
predict(object, newdata = NULL, type = c("link", "response"),
level, na.action = na.pass, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted object of class inheriting from `"glmmPQL"`. |
| `newdata` | optionally, a data frame in which to look for variables with which to predict. |
| `type` | the type of prediction required. The default is on the scale of the linear predictors; the alternative `"response"` is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and `type = "response"` gives the predicted probabilities. |
| `level` | an optional integer vector giving the level(s) of grouping to be used in obtaining the predictions. Level values increase from outermost to innermost grouping, with level zero corresponding to the population predictions. Defaults to the highest or innermost level of grouping. |
| `na.action` | function determining what should be done with missing values in `newdata`. The default is to predict `NA`. |
| `...` | further arguments passed to or from other methods. |
### Value
If `level` is a single integer, a vector; otherwise a data frame.
### See Also
`[glmmPQL](glmmpql)`, `[predict.lme](../../nlme/html/predict.lme)`.
### Examples
```
fit <- glmmPQL(y ~ trt + I(week > 2), random = ~1 | ID,
family = binomial, data = bacteria)
predict(fit, bacteria, level = 0, type="response")
predict(fit, bacteria, level = 1, type="response")
```
`qda` Quadratic Discriminant Analysis
--------------------------------------
### Description
Quadratic discriminant analysis.
### Usage
```
qda(x, ...)
## S3 method for class 'formula'
qda(formula, data, ..., subset, na.action)
## Default S3 method:
qda(x, grouping, prior = proportions,
method, CV = FALSE, nu, ...)
## S3 method for class 'data.frame'
qda(x, ...)
## S3 method for class 'matrix'
qda(x, grouping, ..., subset, na.action)
```
### Arguments
| | |
| --- | --- |
| `formula` | A formula of the form `groups ~ x1 + x2 + ...` That is, the response is the grouping factor and the right hand side specifies the (non-factor) discriminators. |
| `data` | An optional data frame, list or environment from which variables specified in `formula` are preferentially to be taken. |
| `x` | (required if no formula is given as the principal argument.) a matrix or data frame or Matrix containing the explanatory variables. |
| `grouping` | (required if no formula principal argument is given.) a factor specifying the class for each observation. |
| `prior` | the prior probabilities of class membership. If unspecified, the class proportions for the training set are used. If specified, the probabilities should be specified in the order of the factor levels. |
| `subset` | An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.) |
| `na.action` | A function to specify the action to be taken if `NA`s are found. The default action is for the procedure to fail. An alternative is na.omit, which leads to rejection of cases with missing values on any required variable. (NOTE: If given, this argument must be named.) |
| `method` | `"moment"` for standard estimators of the mean and variance, `"mle"` for MLEs, `"mve"` to use `cov.mve`, or `"t"` for robust estimates based on a t distribution. |
| `CV` | If true, returns results (classes and posterior probabilities) for leave-one-out cross-validation. Note that if the prior is estimated, the proportions in the whole dataset are used. |
| `nu` | degrees of freedom for `method = "t"`. |
| `...` | arguments passed to or from other methods. |
### Details
Uses a QR decomposition which will give an error message if the within-group variance is singular for any group.
### Value
an object of class `"qda"` containing the following components:
| | |
| --- | --- |
| `prior` | the prior probabilities used. |
| `means` | the group means. |
| `scaling` | for each group `i`, `scaling[,,i]` is an array which transforms observations so that within-groups covariance matrix is spherical. |
| `ldet` | a vector of half log determinants of the dispersion matrix. |
| `lev` | the levels of the grouping factor. |
| `terms` | (if formula is a formula) an object of mode expression and class term summarizing the formula. |
| `call` | the (matched) function call. |
unless `CV=TRUE`, when the return value is a list with components:
| | |
| --- | --- |
| `class` | The MAP classification (a factor) |
| `posterior` | posterior probabilities for the classes |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks*. Cambridge University Press.
### See Also
`<predict.qda>`, `<lda>`
### Examples
```
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
z <- qda(train, cl)
predict(z,test)$class
```
`SP500` Returns of the Standard and Poors 500
----------------------------------------------
### Description
Returns of the Standard and Poors 500 Index in the 1990s.
### Usage
```
SP500
```
### Format
A vector of returns of the Standard and Poors 500 index for all the trading days in 1990, 1991, ..., 1999.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
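### Examples
A brief sketch, not from the original help page: look at the distribution of the daily returns.
```
truehist(SP500, nbins = 50)   # truehist() is also in this package
qqnorm(SP500); qqline(SP500)
```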
`studres` Extract Studentized Residuals from a Linear Model
------------------------------------------------------------
### Description
The Studentized residuals. Like standardized residuals, these are normalized to unit variance, but the Studentized version is fitted ignoring the current data point. (They are sometimes called jackknifed residuals).
### Usage
```
studres(object)
```
### Arguments
| | |
| --- | --- |
| `object` | any object representing a linear model. |
### Value
The vector of appropriately transformed residuals.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[residuals](../../stats/html/residuals)`, `<stdres>`
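### Examples
A small sketch, not from the original help page, using the `hills` data from this package.
```
## Studentized residuals against fitted values for a simple linear model
hills.lm <- lm(time ~ dist + climb, data = hills)
plot(fitted(hills.lm), studres(hills.lm))
abline(h = 0, lty = 2)
```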
`dropterm` Try All One-Term Deletions from a Model
---------------------------------------------------
### Description
Try fitting all models that differ from the current model by dropping a single term, maintaining marginality.
This function is generic; there exist methods for classes `lm` and `glm` and the default method will work for many other classes.
### Usage
```
dropterm (object, ...)
## Default S3 method:
dropterm(object, scope, scale = 0, test = c("none", "Chisq"),
k = 2, sorted = FALSE, trace = FALSE, ...)
## S3 method for class 'lm'
dropterm(object, scope, scale = 0, test = c("none", "Chisq", "F"),
k = 2, sorted = FALSE, ...)
## S3 method for class 'glm'
dropterm(object, scope, scale = 0, test = c("none", "Chisq", "F"),
k = 2, sorted = FALSE, trace = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | An object fitted by some model-fitting function. |
| `scope` | a formula giving terms which might be dropped. By default, the model formula. Only terms that can be dropped and maintain marginality are actually tried. |
| `scale` | used in the definition of the AIC statistic for selecting the models, currently only for `lm`, `aov` and `glm` models. Specifying `scale` asserts that the residual standard error or dispersion is known. |
| `test` | should the results include a test statistic relative to the original model? The F test is only appropriate for `lm` and `aov` models, and perhaps for some over-dispersed `glm` models. The Chisq test can be an exact test (`lm` models with known scale) or a likelihood-ratio test depending on the method. |
| `k` | the multiple of the number of degrees of freedom used for the penalty. Only `k = 2` gives the genuine AIC: `k = log(n)` is sometimes referred to as BIC or SBC. |
| `sorted` | should the results be sorted on the value of AIC? |
| `trace` | if `TRUE` additional information may be given on the fits as they are tried. |
| `...` | arguments passed to or from other methods. |
### Details
The definition of AIC is only up to an additive constant: when appropriate (`lm` models with specified scale) the constant is taken to be that used in Mallows' Cp statistic and the results are labelled accordingly.
### Value
A table of class `"anova"` containing at least columns for the change in degrees of freedom and AIC (or Cp) for the models. Some methods will give further information, for example sums of squares, deviances, log-likelihoods and test statistics.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<addterm>`, `[stepAIC](stepaic)`
### Examples
```
quine.hi <- aov(log(Days + 2.5) ~ .^4, quine)
quine.nxt <- update(quine.hi, . ~ . - Eth:Sex:Age:Lrn)
dropterm(quine.nxt, test= "F")
quine.stp <- stepAIC(quine.nxt,
scope = list(upper = ~Eth*Sex*Age*Lrn, lower = ~1),
trace = FALSE)
dropterm(quine.stp, test = "F")
quine.3 <- update(quine.stp, . ~ . - Eth:Age:Lrn)
dropterm(quine.3, test = "F")
quine.4 <- update(quine.3, . ~ . - Eth:Age)
dropterm(quine.4, test = "F")
quine.5 <- update(quine.4, . ~ . - Age:Lrn)
dropterm(quine.5, test = "F")
house.glm0 <- glm(Freq ~ Infl*Type*Cont + Sat, family=poisson,
data = housing)
house.glm1 <- update(house.glm0, . ~ . + Sat*(Infl+Type+Cont))
dropterm(house.glm1, test = "Chisq")
```
`area` Adaptive Numerical Integration
--------------------------------------
### Description
Integrate a function of one variable over a finite range using a recursive adaptive method. This function is mainly for demonstration purposes.
### Usage
```
area(f, a, b, ..., fa = f(a, ...), fb = f(b, ...),
limit = 10, eps = 1e-05)
```
### Arguments
| | |
| --- | --- |
| `f` | The integrand as an `S` function object. The variable of integration must be the first argument. |
| `a` | Lower limit of integration. |
| `b` | Upper limit of integration. |
| `...` | Additional arguments needed by the integrand. |
| `fa` | Function value at the lower limit. |
| `fb` | Function value at the upper limit. |
| `limit` | Limit on the depth to which recursion is allowed to go. |
| `eps` | Error tolerance to control the process. |
### Details
The method divides the interval in two and compares the values given by Simpson's rule and the trapezium rule. If these are within eps of each other the Simpson's rule result is given, otherwise the process is applied separately to each half of the interval and the results added together.
### Value
The integral from `a` to `b` of `f(x)`.
### References
Venables, W. N. and Ripley, B. D. (1994) *Modern Applied Statistics with S-Plus.* Springer. pp. 105–110.
### Examples
```
area(sin, 0, pi) # integrate the sin function from 0 to pi.
```
`Insurance` Numbers of Car Insurance claims
--------------------------------------------
### Description
The data given in data frame `Insurance` consist of the numbers of policyholders of an insurance company who were exposed to risk, and the numbers of car insurance claims made by those policyholders in the third quarter of 1973.
### Usage
```
Insurance
```
### Format
This data frame contains the following columns:
`District`
factor: district of residence of policyholder (1 to 4): 4 is major cities.
`Group`
an ordered factor: group of car with levels <1 litre, 1–1.5 litre, 1.5–2 litre, >2 litre.
`Age`
an ordered factor: the age of the insured in 4 groups labelled <25, 25–29, 30–35, >35.
`Holders`
numbers of policyholders.
`Claims`
numbers of claims
### Source
L. A. Baxter, S. M. Coutts and G. A. F. Ross (1980) Applications of linear models in motor insurance. *Proceedings of the 21st International Congress of Actuaries, Zurich* pp. 11–29.
M. Aitkin, D. Anderson, B. Francis and J. Hinde (1989) *Statistical Modelling in GLIM.* Oxford University Press.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
### Examples
```
## main-effects fit as Poisson GLM with offset
glm(Claims ~ District + Group + Age + offset(log(Holders)),
data = Insurance, family = poisson)
# same via loglm
loglm(Claims ~ District + Group + Age + offset(log(Holders)),
data = Insurance)
```
`profile.glm` Method for Profiling glm Objects
-----------------------------------------------
### Description
Investigates the profile log-likelihood function for a fitted model of class `"glm"`.
### Usage
```
## S3 method for class 'glm'
profile(fitted, which = 1:p, alpha = 0.01, maxsteps = 10,
del = zmax/5, trace = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `fitted` | the original fitted model object. |
| `which` | the original model parameters which should be profiled. This can be a numeric or character vector. By default, all parameters are profiled. |
| `alpha` | highest significance level allowed for the profile t-statistics. |
| `maxsteps` | maximum number of points to be used for profiling each parameter. |
| `del` | suggested change on the scale of the profile t-statistics. Default value chosen to allow profiling at about 10 parameter values. |
| `trace` | logical: should the progress of profiling be reported? |
| `...` | further arguments passed to or from other methods. |
### Details
The profile t-statistic is defined as the square root of change in sum-of-squares divided by residual standard error with an appropriate sign.
### Value
A list of classes `"profile.glm"` and `"profile"` with an element for each parameter being profiled. The elements are data-frames with two variables
| | |
| --- | --- |
| `par.vals` | a matrix of parameter values for each fitted model. |
| `tau` | the profile t-statistics. |
### Author(s)
Originally, D. M. Bates and W. N. Venables. (For S in 1996.)
### See Also
`[glm](../../stats/html/glm)`, `[profile](../../stats/html/profile)`, `<plot.profile>`
### Examples
```
options(contrasts = c("contr.treatment", "contr.poly"))
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20 - numdead)
budworm.lg <- glm(SF ~ sex*ldose, family = binomial)
pr1 <- profile(budworm.lg)
plot(pr1)
pairs(pr1)
```
`minn38` Minnesota High School Graduates of 1938
-------------------------------------------------
### Description
The Minnesota high school graduates of 1938 were classified according to four factors, described below. The `minn38` data frame has 168 rows and 5 columns.
### Usage
```
minn38
```
### Format
This data frame contains the following columns:
`hs`
high school rank: `"L"`, `"M"` and `"U"` for lower, middle and upper third.
`phs`
post high school status: Enrolled in college, (`"C"`), enrolled in non-collegiate school, (`"N"`), employed full-time, (`"E"`) and other, (`"O"`).
`fol`
father's occupational level, (seven levels, `"F1"`, `"F2"`, ..., `"F7"`).
`sex`
sex: factor with levels `"F"` or `"M"`.
`f`
frequency.
### Source
From R. L. Plackett, (1974) *The Analysis of Categorical Data.* London: Griffin
who quotes the data from
Hoyt, C. J., Krishnaiah, P. R. and Torrance, E. P. (1959) Analysis of complex contingency tables, *J. Exp. Ed.* **27**, 187–194.
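### Examples
A brief sketch, not from the original help page: fit the mutual-independence log-linear model to the frequencies, following the data-frame interface of `loglm`.
```
loglm(f ~ hs + phs + fol + sex, data = minn38)
```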
`Cushings` Diagnostic Tests on Patients with Cushing's Syndrome
----------------------------------------------------------------
### Description
Cushing's syndrome is a hypertensive disorder associated with over-secretion of cortisol by the adrenal gland. The observations are urinary excretion rates of two steroid metabolites.
### Usage
```
Cushings
```
### Format
The `Cushings` data frame has 27 rows and 3 columns:
`Tetrahydrocortisone`
urinary excretion rate (mg/24hr) of Tetrahydrocortisone.
`Pregnanetriol`
urinary excretion rate (mg/24hr) of Pregnanetriol.
`Type`
underlying type of syndrome, coded `a` (adenoma), `b` (bilateral hyperplasia), `c` (carcinoma) or `u` for unknown.
### Source
J. Aitchison and I. R. Dunsmore (1975) *Statistical Prediction Analysis.* Cambridge University Press, Tables 11.1–3.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
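### Examples
A small sketch, not from the original help page: discriminate the patients of known type, working on the log scale as in the MASS text.
```
cush <- Cushings[Cushings$Type != "u", ]
cush$Type <- factor(cush$Type)
cush.lda <- lda(Type ~ log(Tetrahydrocortisone) + log(Pregnanetriol),
                data = cush)
predict(cush.lda)$class
```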
`pairs.lda` Produce Pairwise Scatterplots from an 'lda' Fit
------------------------------------------------------------
### Description
Pairwise scatterplot of the data on the linear discriminants.
### Usage
```
## S3 method for class 'lda'
pairs(x, labels = colnames(x), panel = panel.lda,
dimen, abbrev = FALSE, ..., cex=0.7, type = c("std", "trellis"))
```
### Arguments
| | |
| --- | --- |
| `x` | Object of class `"lda"`. |
| `labels` | vector of character strings for labelling the variables. |
| `panel` | panel function to plot the data in each panel. |
| `dimen` | The number of linear discriminants to be used for the plot; if this exceeds the number determined by `x` the smaller value is used. |
| `abbrev` | whether the group labels are abbreviated on the plots. If `abbrev > 0` this gives `minlength` in the call to `abbreviate`. |
| `...` | additional arguments for `pairs.default`. |
| `cex` | graphics parameter `cex` for labels on plots. |
| `type` | type of plot. The default is in the style of `[pairs.default](../../graphics/html/pairs)`; the style `"trellis"` uses the Trellis function `[splom](../../lattice/html/splom)`. |
### Details
This function is a method for the generic function `pairs()` for class `"lda"`. It can be invoked by calling `pairs(x)` for an object `x` of the appropriate class, or directly by calling `pairs.lda(x)` regardless of the class of the object.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[pairs](../../graphics/html/pairs)`
`GAGurine` Level of GAG in Urine of Children
---------------------------------------------
### Description
Data were collected on the concentration of a chemical GAG in the urine of 314 children aged from zero to seventeen years. The aim of the study was to produce a chart to help a paediatrician to assess if a child's GAG concentration is ‘normal’.
### Usage
```
GAGurine
```
### Format
This data frame contains the following columns:
`Age`
age of child in years.
`GAG`
concentration of GAG (the units have been lost).
### Source
Mrs Susan Prosser, Paediatrics Department, University of Oxford, via Department of Statistics Consulting Service.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
`parcoord` Parallel Coordinates Plot
-------------------------------------
### Description
Parallel coordinates plot
### Usage
```
parcoord(x, col = 1, lty = 1, var.label = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame whose columns represent variables. Missing values are allowed. |
| `col` | A vector of colours, recycled as necessary for each observation. |
| `lty` | A vector of line types, recycled as necessary for each observation. |
| `var.label` | If `TRUE`, each variable's axis is labelled with maximum and minimum values. |
| `...` | Further graphics parameters which are passed to `matplot`. |
### Side Effects
a parallel coordinates plot is drawn.
### Author(s)
B. D. Ripley. Enhancements based on ideas and code by Fabian Scheipl.
### References
Wegman, E. J. (1990) Hyperdimensional data analysis using parallel coordinates. *Journal of the American Statistical Association* **85**, 664–675.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
parcoord(state.x77[, c(7, 4, 6, 2, 5, 3)])
ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
parcoord(log(ir)[, c(3, 4, 2, 1)], col = 1 + (0:149)%/%50)
```
`mammals` Brain and Body Weights for 62 Species of Land Mammals
----------------------------------------------------------------
### Description
A data frame with average brain and body weights for 62 species of land mammals.
### Usage
```
mammals
```
### Format
`body`
body weight in kg.
`brain`
brain weight in g.
`name`
Common name of species. (Rock hyrax-a = *Heterohyrax brucci*, Rock hyrax-b = *Procavia habessinic.*.)
### Source
Weisberg, S. (1985) *Applied Linear Regression.* 2nd edition. Wiley, pp. 144–5.
Selected from: Allison, T. and Cicchetti, D. V. (1976) Sleep in mammals: ecological and constitutional correlates. *Science* **194**, 732–734.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
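### Examples
A brief sketch, not from the original help page: the classical log-log relationship between brain and body weight.
```
plot(log(brain) ~ log(body), data = mammals)
abline(lm(log(brain) ~ log(body), data = mammals))
```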
`sammon` Sammon's Non-Linear Mapping
-------------------------------------
### Description
One form of non-metric multidimensional scaling.
### Usage
```
sammon(d, y = cmdscale(d, k), k = 2, niter = 100, trace = TRUE,
magic = 0.2, tol = 1e-4)
```
### Arguments
| | |
| --- | --- |
| `d` | distance structure of the form returned by `dist`, or a full, symmetric matrix. Data are assumed to be dissimilarities or relative distances, but must be positive except for self-distance. This can contain missing values. |
| `y` | An initial configuration. If none is supplied, `cmdscale` is used to provide the classical solution. (If there are missing values in `d`, an initial configuration must be provided.) This must not have duplicates. |
| `k` | The dimension of the configuration. |
| `niter` | The maximum number of iterations. |
| `trace` | Logical for tracing optimization. Default `TRUE`. |
| `magic` | initial value of the step size constant in diagonal Newton method. |
| `tol` | Tolerance for stopping, in units of stress. |
### Details
This chooses a two-dimensional configuration to minimize the stress, the sum of squared differences between the input distances and those of the configuration, weighted by the distances, the whole sum being divided by the sum of input distances to make the stress scale-free.
An iterative algorithm is used, which will usually converge in around 50 iterations. As this is necessarily an *O(n^2)* calculation, it is slow for large datasets. Further, since the configuration is only determined up to rotations and reflections (by convention the centroid is at the origin), the result can vary considerably from machine to machine. In this release the algorithm has been modified by adding a step-length search (`magic`) to ensure that it always goes downhill.
### Value
Two components:
| | |
| --- | --- |
| `points` | A two-column vector of the fitted configuration. |
| `stress` | The final stress achieved. |
### Side Effects
If trace is true, the initial stress and the current stress are printed out every 10 iterations.
### References
Sammon, J. W. (1969) A non-linear mapping for data structure analysis. *IEEE Trans. Comput.*, **C-18** 401–409.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks*. Cambridge University Press.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[cmdscale](../../stats/html/cmdscale)`, `[isoMDS](isomds)`
### Examples
```
swiss.x <- as.matrix(swiss[, -1])
swiss.sam <- sammon(dist(swiss.x))
plot(swiss.sam$points, type = "n")
text(swiss.sam$points, labels = as.character(1:nrow(swiss.x)))
```
`mca` Multiple Correspondence Analysis
---------------------------------------
### Description
Computes a multiple correspondence analysis of a set of factors.
### Usage
```
mca(df, nf = 2, abbrev = FALSE)
```
### Arguments
| | |
| --- | --- |
| `df` | A data frame containing only factors |
| `nf` | The number of dimensions for the MCA. Rarely 3 might be useful. |
| `abbrev` | Should the vertex names be abbreviated? By default these are of the form ‘factor.level’ but if `abbrev = TRUE` they are just ‘level’ which will suffice if the factors have distinct levels. |
### Value
An object of class `"mca"`, with components
| | |
| --- | --- |
| `rs` | The coordinates of the rows, in `nf` dimensions. |
| `cs` | The coordinates of the column vertices, one for each level of each factor. |
| `fs` | Weights for each row, used to interpolate additional factors in `predict.mca`. |
| `p` | The number of factors |
| `d` | The singular values for the `nf` dimensions. |
| `call` | The matched call. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<predict.mca>`, `<plot.mca>`, `<corresp>`
### Examples
```
farms.mca <- mca(farms, abbrev=TRUE)
farms.mca
plot(farms.mca)
```
`michelson` Michelson's Speed of Light Data
--------------------------------------------
### Description
Measurements of the speed of light in air, made between 5th June and 2nd July, 1879. The data consists of five experiments, each consisting of 20 consecutive runs. The response is the speed of light in km/s, less 299000. The currently accepted value, on this scale of measurement, is 734.5.
### Usage
```
michelson
```
### Format
The data frame contains the following components:
`Expt`
The experiment number, from 1 to 5.
`Run`
The run number within each experiment.
`Speed`
Speed-of-light measurement.
### Source
A.J. Weekes (1986) *A Genstat Primer.* Edward Arnold.
S. M. Stigler (1977) Do robust estimators work with real data? *Annals of Statistics* **5**, 1055–1098.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
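### Examples
A brief sketch, not from the original help page: compare the five experiments against the currently accepted value quoted above.
```
boxplot(Speed ~ Expt, data = michelson,
        xlab = "Experiment", ylab = "Speed of light - 299000 (km/s)")
abline(h = 734.5, lty = 2)   # currently accepted value on this scale
```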
`npr1` US Naval Petroleum Reserve No. 1 data
---------------------------------------------
### Description
Data on the locations, porosity and permeability (a measure of oil flow) on 104 oil wells in the US Naval Petroleum Reserve No. 1 in California.
### Usage
```
npr1
```
### Format
This data frame contains the following columns:
`x`
x coordinates, in miles (origin unspecified).
`y`
y coordinates, in miles.
`perm`
permeability in milli-Darcies.
`por`
porosity (%).
### Source
Maher, J.C., Carter, R.D. and Lantz, R.J. (1975) Petroleum geology of Naval Petroleum Reserve No. 1, Elk Hills, Kern County, California. *USGS Professional Paper* **912**.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
`summary.loglm` Summary Method Function for Objects of Class 'loglm'
---------------------------------------------------------------------
### Description
Returns a summary list for log-linear models fitted by iterative proportional scaling using `loglm`.
### Usage
```
## S3 method for class 'loglm'
summary(object, fitted = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted loglm model object. |
| `fitted` | if `TRUE` return observed and expected frequencies in the result. Using `fitted = TRUE` may necessitate re-fitting the object. |
| `...` | arguments to be passed to or from other methods. |
### Details
This function is a method for the generic function `summary()` for class `"loglm"`. It can be invoked by calling `summary(x)` for an object `x` of the appropriate class, or directly by calling `summary.loglm(x)` regardless of the class of the object.
### Value
a list is returned for use by `print.summary.loglm`. This has components
| | |
| --- | --- |
| `formula` | the formula used to produce `object` |
| `tests` | the table of test statistics (likelihood ratio, Pearson) for the fit. |
| `oe` | if `fitted = TRUE`, an array of the observed and expected frequencies, otherwise `NULL`. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<loglm>`, `[summary](../../base/html/summary)`
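### Examples
A small sketch, not from the original help page, using the `housing` data from this package.
```
## a log-linear model for the housing satisfaction data; asking for the
## fitted values may require the model to be re-fitted
house.loglm <- loglm(Freq ~ Infl * Type * Cont + Sat, data = housing)
summary(house.loglm)                 # tests of fit only
summary(house.loglm, fitted = TRUE)  # also observed and expected counts
```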
`predict.qda` Classify from Quadratic Discriminant Analysis
------------------------------------------------------------
### Description
Classify multivariate observations in conjunction with `qda`
### Usage
```
## S3 method for class 'qda'
predict(object, newdata, prior = object$prior,
method = c("plug-in", "predictive", "debiased", "looCV"), ...)
```
### Arguments
| | |
| --- | --- |
| `object` | object of class `"qda"` |
| `newdata` | data frame of cases to be classified or, if `object` has a formula, a data frame with columns of the same names as the variables used. A vector will be interpreted as a row vector. If newdata is missing, an attempt will be made to retrieve the data used to fit the `qda` object. |
| `prior` | The prior probabilities of the classes, by default the proportions in the training set or what was set in the call to `qda`. |
| `method` | This determines how the parameter estimation is handled. With `"plug-in"` (the default) the usual unbiased parameter estimates are used and assumed to be correct. With `"debiased"` an unbiased estimator of the log posterior probabilities is used, and with `"predictive"` the parameter estimates are integrated out using a vague prior. With `"looCV"` the leave-one-out cross-validation fits to the original dataset are computed and returned. |
| `...` | arguments passed from or to other methods. |
### Details
This function is a method for the generic function `predict()` for class `"qda"`. It can be invoked by calling `predict(x)` for an object `x` of the appropriate class, or directly by calling `predict.qda(x)` regardless of the class of the object.
Missing values in `newdata` are handled by returning `NA` if the quadratic discriminants cannot be evaluated. If `newdata` is omitted and the `na.action` of the fit omitted cases, these will be omitted on the prediction.
### Value
a list with components
| | |
| --- | --- |
| `class` | The MAP classification (a factor) |
| `posterior` | posterior probabilities for the classes |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks*. Cambridge University Press.
### See Also
`<qda>`, `<lda>`, `<predict.lda>`
### Examples
```
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
zq <- qda(train, cl)
predict(zq, test)$class
```
r None
`Cars93` Data from 93 Cars on Sale in the USA in 1993
------------------------------------------------------
### Description
The `Cars93` data frame has 93 rows and 27 columns.
### Usage
```
Cars93
```
### Format
This data frame contains the following columns:
`Manufacturer`
Manufacturer.
`Model`
Model.
`Type`
Type: a factor with levels `"Small"`, `"Sporty"`, `"Compact"`, `"Midsize"`, `"Large"` and `"Van"`.
`Min.Price`
Minimum Price (in \$1,000): price for a basic version.
`Price`
Midrange Price (in \$1,000): average of `Min.Price` and `Max.Price`.
`Max.Price`
Maximum Price (in \$1,000): price for “a premium version”.
`MPG.city`
City MPG (miles per US gallon by EPA rating).
`MPG.highway`
Highway MPG.
`AirBags`
Air Bags standard. Factor: none, driver only, or driver & passenger.
`DriveTrain`
Drive train type: rear wheel, front wheel or 4WD; (factor).
`Cylinders`
Number of cylinders (missing for Mazda RX-7, which has a rotary engine).
`EngineSize`
Engine size (litres).
`Horsepower`
Horsepower (maximum).
`RPM`
RPM (revs per minute at maximum horsepower).
`Rev.per.mile`
Engine revolutions per mile (in highest gear).
`Man.trans.avail`
Is a manual transmission version available? (yes or no, Factor).
`Fuel.tank.capacity`
Fuel tank capacity (US gallons).
`Passengers`
Passenger capacity (persons)
`Length`
Length (inches).
`Wheelbase`
Wheelbase (inches).
`Width`
Width (inches).
`Turn.circle`
U-turn space (feet).
`Rear.seat.room`
Rear seat room (inches) (missing for 2-seater vehicles).
`Luggage.room`
Luggage capacity (cubic feet) (missing for vans).
`Weight`
Weight (pounds).
`Origin`
Of non-USA or USA company origins? (factor).
`Make`
Combination of Manufacturer and Model (character).
### Details
Cars were selected at random from among 1993 passenger car models that were listed in both the *Consumer Reports* issue and the *PACE Buying Guide*. Pickup trucks and Sport/Utility vehicles were eliminated due to incomplete information in the *Consumer Reports* source. Duplicate models (e.g., Dodge Shadow and Plymouth Sundance) were listed at most once.
Further description can be found in Lock (1993).
### Source
Lock, R. H. (1993) 1993 New Car Data. *Journal of Statistics Education* **1**(1). doi: [10.1080/10691898.1993.11910459](https://doi.org/10.1080/10691898.1993.11910459)
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
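### Examples
An illustrative sketch: midrange price against weight, distinguishing USA and non-USA cars:
```
with(Cars93, plot(Weight, Price, col = as.integer(Origin), pch = 16,
                  xlab = "Weight (lb)", ylab = "Midrange price ($1,000)"))
legend("topleft", legend = levels(Cars93$Origin), col = 1:2, pch = 16)
```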
r None
`gamma.shape.glm` Estimate the Shape Parameter of the Gamma Distribution in a GLM Fit
--------------------------------------------------------------------------------------
### Description
Find the maximum likelihood estimate of the shape parameter of the gamma distribution after fitting a `Gamma` generalized linear model.
### Usage
```
## S3 method for class 'glm'
gamma.shape(object, it.lim = 10,
eps.max = .Machine$double.eps^0.25, verbose = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | Fitted model object from a `Gamma` family or `quasi` family with `variance = "mu^2"`. |
| `it.lim` | Upper limit on the number of iterations. |
| `eps.max` | Maximum discrepancy between approximations for the iteration process to continue. |
| `verbose` | If `TRUE`, causes successive iterations to be printed out. The initial estimate is taken from the deviance. |
| `...` | further arguments passed to or from other methods. |
### Details
A glm fit for a Gamma family correctly calculates the maximum likelihood estimate of the mean parameters but provides only a crude estimate of the dispersion parameter. This function takes the results of the glm fit and solves the maximum likelihood equation for the reciprocal of the dispersion parameter, which is usually called the shape (or exponent) parameter.
### Value
List of two components
| | |
| --- | --- |
| `alpha` | the maximum likelihood estimate |
| `SE` | the approximate standard error, the square-root of the reciprocal of the observed information. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<gamma.dispersion>`
### Examples
```
clotting <- data.frame(
u = c(5,10,15,20,30,40,60,80,100),
lot1 = c(118,58,42,35,27,25,21,19,18),
lot2 = c(69,35,26,21,18,16,13,12,12))
clot1 <- glm(lot1 ~ log(u), data = clotting, family = Gamma)
gamma.shape(clot1)
gm <- glm(Days + 0.1 ~ Age*Eth*Sex*Lrn,
quasi(link=log, variance="mu^2"), quine,
start = c(3, rep(0,31)))
gamma.shape(gm, verbose = TRUE)
summary(gm, dispersion = gamma.dispersion(gm)) # better summary
```
r None
`lm.gls` Fit Linear Models by Generalized Least Squares
--------------------------------------------------------
### Description
Fit linear models by Generalized Least Squares
### Usage
```
lm.gls(formula, data, W, subset, na.action, inverse = FALSE,
method = "qr", model = FALSE, x = FALSE, y = FALSE,
contrasts = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula expression as for regression models, of the form `response ~ predictors`. See the documentation of `formula` for other details. |
| `data` | an optional data frame, list or environment in which to interpret the variables occurring in `formula`. |
| `W` | a weight matrix. |
| `subset` | expression saying which subset of the rows of the data should be used in the fit. All observations are included by default. |
| `na.action` | a function to filter missing data. |
| `inverse` | logical: if true `W` specifies the inverse of the weight matrix: this is appropriate if a variance matrix is used. |
| `method` | method to be used by `lm.fit`. |
| `model` | should the model frame be returned? |
| `x` | should the design matrix be returned? |
| `y` | should the response be returned? |
| `contrasts` | a list of contrasts to be used for some or all of the factor terms in the formula. |
| `...` | additional arguments to `[lm.fit](../../stats/html/lmfit)`. |
### Details
The problem is transformed to uncorrelated form and passed to `[lm.fit](../../stats/html/lmfit)`.
### Value
An object of class `"lm.gls"`, which is similar to an `"lm"` object. There is no `"weights"` component, and only a few `"lm"` methods will work correctly. As from version 7.1-22 the residuals and fitted values refer to the untransformed problem.
### See Also
`[gls](../../nlme/html/gls)`, `[lm](../../stats/html/lm)`, `<lm.ridge>`
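### Examples
An illustrative sketch on simulated data with an assumed AR(1)-type variance matrix; since `W` is supplied as a variance matrix, `inverse = TRUE` is used:
```
set.seed(1)
n <- 50
tt <- seq_len(n)
Sigma <- 0.8^abs(outer(tt, tt, "-"))                   # assumed variance matrix
y <- 2 + 0.1 * tt + drop(t(chol(Sigma)) %*% rnorm(n))  # correlated errors
fit <- lm.gls(y ~ tt, W = Sigma, inverse = TRUE)
coef(fit)
```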
r None
`predict.mca` Predict Method for Class 'mca'
---------------------------------------------
### Description
Used to compute coordinates for additional rows or additional factors in a multiple correspondence analysis.
### Usage
```
## S3 method for class 'mca'
predict(object, newdata, type = c("row", "factor"), ...)
```
### Arguments
| | |
| --- | --- |
| `object` | An object of class `"mca"`, usually the result of a call to `mca`. |
| `newdata` | A data frame containing *either* additional rows of the factors used to fit `object` *or* additional factors for the cases used in the original fit. |
| `type` | Are predictions required for further rows or for new factors? |
| `...` | Additional arguments from `predict`: unused. |
### Value
If `type = "row"`, the coordinates for the additional rows.
If `type = "factor"`, the coordinates of the column vertices for the levels of the new factors.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<mca>`, `<plot.mca>`
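### Examples
An illustrative sketch using the `farms` data: the original rows are passed back as `newdata` to obtain their row coordinates:
```
farms.mca <- mca(farms, abbrev = TRUE)
head(predict(farms.mca, newdata = farms, type = "row"))
```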
r None
`rational` Rational Approximation
----------------------------------
### Description
Find rational approximations to the components of a real numeric object using a standard continued fraction method.
### Usage
```
rational(x, cycles = 10, max.denominator = 2000, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | Any object of mode numeric. Missing values are now allowed. |
| `cycles` | The maximum number of steps to be used in the continued fraction approximation process. |
| `max.denominator` | An early termination criterion. If any partial denominator exceeds `max.denominator` the continued fraction stops at that point. |
| `...` | arguments passed to or from other methods. |
### Details
Each component is first expanded in a continued fraction of the form
`x = floor(x) + 1/(p1 + 1/(p2 + ...))`
where `p1`, `p2`, ... are positive integers, terminating either at `cycles` terms or when a `pj > max.denominator`. The continued fraction is then re-arranged to retrieve the numerator and denominator as integers and the ratio returned as the value.
### Value
A numeric object with the same attributes as `x` but with entries rational approximations to the values. This effectively rounds relative to the size of the object and replaces very small entries by zero.
### See Also
`<fractions>`
### Examples
```
X <- matrix(runif(25), 5, 5)
zapsmall(solve(X, X/5)) # print near-zeroes as zero
rational(solve(X, X/5))
```
r None
`huber` Huber M-estimator of Location with MAD Scale
-----------------------------------------------------
### Description
Finds the Huber M-estimator of location with MAD scale.
### Usage
```
huber(y, k = 1.5, tol = 1e-06)
```
### Arguments
| | |
| --- | --- |
| `y` | vector of data values |
| `k` | Winsorizes at `k` standard deviations |
| `tol` | convergence tolerance |
### Value
list of location and scale parameters
| | |
| --- | --- |
| `mu` | location estimate |
| `s` | MAD scale estimate |
### References
Huber, P. J. (1981) *Robust Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<hubers>`, `[mad](../../stats/html/mad)`
### Examples
```
huber(chem)
```
r None
`npk` Classical N, P, K Factorial Experiment
---------------------------------------------
### Description
A classical N, P, K (nitrogen, phosphate, potassium) factorial experiment on the growth of peas conducted on 6 blocks. Each half of a fractional factorial design confounding the NPK interaction was used on 3 of the plots.
### Usage
```
npk
```
### Format
The `npk` data frame has 24 rows and 5 columns:
`block`
which block (label 1 to 6).
`N`
indicator (0/1) for the application of nitrogen.
`P`
indicator (0/1) for the application of phosphate.
`K`
indicator (0/1) for the application of potassium.
`yield`
Yield of peas, in pounds/plot (the plots were (1/70) acre).
### Note
This dataset is also contained in **R** 3.0.2 and later.
### Source
Imperial College, London, M.Sc. exercise sheet.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
options(contrasts = c("contr.sum", "contr.poly"))
npk.aov <- aov(yield ~ block + N*P*K, npk)
npk.aov
summary(npk.aov)
alias(npk.aov)
coef(npk.aov)
options(contrasts = c("contr.treatment", "contr.poly"))
npk.aov1 <- aov(yield ~ block + N + K, data = npk)
summary.lm(npk.aov1)
se.contrast(npk.aov1, list(N=="0", N=="1"), data = npk)
## IGNORE_RDIFF_BEGIN
model.tables(npk.aov1, type = "means", se = TRUE)
## IGNORE_RDIFF_END
```
r None
`VA` Veteran's Administration Lung Cancer Trial
------------------------------------------------
### Description
Veteran's Administration lung cancer trial from Kalbfleisch & Prentice.
### Usage
```
VA
```
### Format
A data frame with columns:
`stime`
survival or follow-up time in days.
`status`
dead or censored.
`treat`
treatment: standard or test.
`age`
patient's age in years.
`Karn`
Karnofsky score of patient's performance on a scale of 0 to 100.
`diag.time`
times since diagnosis in months at entry to trial.
`cell`
one of four cell types.
`prior`
prior therapy?
### Source
Kalbfleisch, J.D. and Prentice R.L. (1980) *The Statistical Analysis of Failure Time Data.* Wiley.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
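### Examples
An illustrative sketch, assuming the `survival` package is available and that `status` uses the usual 1 = dead, 0 = censored coding:
```
library(survival)
plot(survfit(Surv(stime, status) ~ treat, data = VA), lty = 1:2,
     xlab = "Days", ylab = "Proportion surviving")
```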
r None
`cov.trob` Covariance Estimation for Multivariate t Distribution
-----------------------------------------------------------------
### Description
Estimates a covariance or correlation matrix assuming the data came from a multivariate t distribution: this provides some degree of robustness to outliers without giving a high breakdown point.
### Usage
```
cov.trob(x, wt = rep(1, n), cor = FALSE, center = TRUE, nu = 5,
maxit = 25, tol = 0.01)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix. Missing values (NAs) are not allowed. |
| `wt` | A vector of weights for each case: these are treated as if the case `i` actually occurred `wt[i]` times. |
| `cor` | Flag to choose between returning the correlation (`cor = TRUE`) or covariance (`cor = FALSE`) matrix. |
| `center` | a logical value or a numeric vector providing the location about which the covariance is to be taken. If `center = FALSE`, no centering is done; if `center = TRUE` the MLE of the location vector is used. |
| `nu` | ‘degrees of freedom’ for the multivariate t distribution. Must exceed 2 (so that the covariance matrix is finite). |
| `maxit` | Maximum number of iterations in fitting. |
| `tol` | Convergence tolerance for fitting. |
### Value
A list with the following components
| | |
| --- | --- |
| `cov` | the fitted covariance matrix. |
| `center` | the estimated or specified location vector. |
| `wt` | the specified weights: only returned if the `wt` argument was given. |
| `n.obs` | the number of cases used in the fitting. |
| `cor` | the fitted correlation matrix: only returned if `cor = TRUE`. |
| `call` | The matched call. |
| `iter` | The number of iterations used. |
### References
J. T. Kent, D. E. Tyler and Y. Vardi (1994) A curious likelihood identity for the multivariate t-distribution. *Communications in Statistics—Simulation and Computation* **23**, 441–453.
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
### See Also
`[cov](../../stats/html/cor)`, `[cov.wt](../../stats/html/cov.wt)`, `[cov.mve](cov.rob)`
### Examples
```
cov.trob(stackloss)
```
r None
`biopsy` Biopsy Data on Breast Cancer Patients
-----------------------------------------------
### Description
This breast cancer database was obtained from the University of Wisconsin Hospitals, Madison from Dr. William H. Wolberg. He assessed biopsies of breast tumours for 699 patients up to 15 July 1992; each of nine attributes has been scored on a scale of 1 to 10, and the outcome is also known. There are 699 rows and 11 columns.
### Usage
```
biopsy
```
### Format
This data frame contains the following columns:
`ID`
sample code number (not unique).
`V1`
clump thickness.
`V2`
uniformity of cell size.
`V3`
uniformity of cell shape.
`V4`
marginal adhesion.
`V5`
single epithelial cell size.
`V6`
bare nuclei (16 values are missing).
`V7`
bland chromatin.
`V8`
normal nucleoli.
`V9`
mitoses.
`class`
`"benign"` or `"malignant"`.
### Source
P. M. Murphy and D. W. Aha (1992). UCI Repository of machine learning databases. [Machine-readable data repository]. Irvine, CA: University of California, Department of Information and Computer Science.
O. L. Mangasarian and W. H. Wolberg (1990) Cancer diagnosis via linear programming. *SIAM News* **23**, pp 1 & 18.
William H. Wolberg and O.L. Mangasarian (1990) Multisurface method of pattern separation for medical diagnosis applied to breast cytology. *Proceedings of the National Academy of Sciences, U.S.A.* **87**, pp. 9193–9196.
O. L. Mangasarian, R. Setiono and W.H. Wolberg (1990) Pattern recognition via linear programming: Theory and application to medical diagnosis. In *Large-scale Numerical Optimization* eds Thomas F. Coleman and Yuying Li, SIAM Publications, Philadelphia, pp 22–30.
K. P. Bennett and O. L. Mangasarian (1992) Robust linear programming discrimination of two linearly inseparable sets. *Optimization Methods and Software* **1**, pp. 23–34 (Gordon & Breach Science Publishers).
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
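### Examples
An illustrative sketch: the class balance and a simple logistic regression on two of the scored attributes (rows with missing `V6` are dropped by the default `na.action`):
```
table(biopsy$class)
biopsy.glm <- glm(class ~ V1 + V6, family = binomial, data = biopsy)
summary(biopsy.glm)
```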
r None
`epil` Seizure Counts for Epileptics
-------------------------------------
### Description
Thall and Vail (1990) give a data set on two-week seizure counts for 59 epileptics. The number of seizures was recorded for a baseline period of 8 weeks, and then patients were randomly assigned to a treatment group or a control group. Counts were then recorded for four successive two-week periods. The subject's age is the only covariate.
### Usage
```
epil
```
### Format
This data frame has 236 rows and the following 9 columns:
`y`
the count for the 2-week period.
`trt`
treatment, `"placebo"` or `"progabide"`.
`base`
the counts in the baseline 8-week period.
`age`
subject's age, in years.
`V4`
`0/1` indicator variable of period 4.
`subject`
subject number, 1 to 59.
`period`
period, 1 to 4.
`lbase`
log-counts for the baseline period, centred to have zero mean.
`lage`
log-ages, centred to have zero mean.
### Source
Thall, P. F. and Vail, S. C. (1990) Some covariance models for longitudinal count data with over-dispersion. *Biometrics* **46**, 657–671.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth Edition. Springer.
### Examples
```
summary(glm(y ~ lbase*trt + lage + V4, family = poisson,
data = epil), cor = FALSE)
epil2 <- epil[epil$period == 1, ]
epil2["period"] <- rep(0, 59); epil2["y"] <- epil2["base"]
epil["time"] <- 1; epil2["time"] <- 4
epil2 <- rbind(epil, epil2)
epil2$pred <- unclass(epil2$trt) * (epil2$period > 0)
epil2$subject <- factor(epil2$subject)
epil3 <- aggregate(epil2, list(epil2$subject, epil2$period > 0),
function(x) if(is.numeric(x)) sum(x) else x[1])
epil3$pred <- factor(epil3$pred,
labels = c("base", "placebo", "drug"))
contrasts(epil3$pred) <- structure(contr.sdif(3),
dimnames = list(NULL, c("placebo-base", "drug-placebo")))
## IGNORE_RDIFF_BEGIN
summary(glm(y ~ pred + factor(subject) + offset(log(time)),
family = poisson, data = epil3), cor = FALSE)
## IGNORE_RDIFF_END
summary(glmmPQL(y ~ lbase*trt + lage + V4,
random = ~ 1 | subject,
family = poisson, data = epil))
summary(glmmPQL(y ~ pred, random = ~1 | subject,
family = poisson, data = epil3))
```
r None
`galaxies` Velocities for 82 Galaxies
--------------------------------------
### Description
A numeric vector of velocities in km/sec of 82 galaxies from 6 well-separated conic sections of an `unfilled` survey of the Corona Borealis region. Multimodality in such surveys is evidence for voids and superclusters in the far universe.
### Usage
```
galaxies
```
### Note
There is an 83rd measurement of 5607 km/sec in the Postman *et al.* paper which is omitted in Roeder (1990) and from the dataset here.
There is also a typo: the 78th observation is given in this dataset as 26690 but should be 26960.
### Source
Roeder, K. (1990) Density estimation with confidence sets exemplified by superclusters and voids in galaxies. *Journal of the American Statistical Association* **85**, 617–624.
Postman, M., Huchra, J. P. and Geller, M. J. (1986) Probes of large-scale structures in the Corona Borealis region. *Astronomical Journal* **92**, 1238–1247.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
gal <- galaxies/1000
c(width.SJ(gal, method = "dpi"), width.SJ(gal))
plot(x = c(0, 40), y = c(0, 0.3), type = "n", bty = "l",
xlab = "velocity of galaxy (1000km/s)", ylab = "density")
rug(gal)
lines(density(gal, width = 3.25, n = 200), lty = 1)
lines(density(gal, width = 2.56, n = 200), lty = 3)
```
r None
`glm.convert` Change a Negative Binomial fit to a GLM fit
----------------------------------------------------------
### Description
This function modifies an output object from `glm.nb()` to one that looks like the output from `glm()` with a negative binomial family. This allows it to be updated keeping the theta parameter fixed.
### Usage
```
glm.convert(object)
```
### Arguments
| | |
| --- | --- |
| `object` | An object of class `"negbin"`, typically the output from `<glm.nb>()`. |
### Details
Convenience function needed to effect some low level changes to the structure of the fitted model object.
### Value
An object of class `"glm"` with negative binomial family. The theta parameter is then fixed at its present estimate.
### See Also
`<glm.nb>`, `<negative.binomial>`, `[glm](../../stats/html/glm)`
### Examples
```
quine.nb1 <- glm.nb(Days ~ Sex/(Age + Eth*Lrn), data = quine)
quine.nbA <- glm.convert(quine.nb1)
quine.nbB <- update(quine.nb1, . ~ . + Sex:Age:Lrn)
anova(quine.nbA, quine.nbB)
```
r None
`immer` Yields from a Barley Field Trial
-----------------------------------------
### Description
The `immer` data frame has 30 rows and 4 columns. Five varieties of barley were grown in six locations in each of 1931 and 1932.
### Usage
```
immer
```
### Format
This data frame contains the following columns:
`Loc`
The location.
`Var`
The variety of barley (`"manchuria"`, `"svansota"`, `"velvet"`, `"trebi"` and `"peatland"`).
`Y1`
Yield in 1931.
`Y2`
Yield in 1932.
### Source
Immer, F.R., Hayes, H.D. and LeRoy Powers (1934) Statistical determination of barley varietal adaptation. *Journal of the American Society for Agronomy* **26**, 403–419.
Fisher, R.A. (1947) *The Design of Experiments.* 4th edition. Edinburgh: Oliver and Boyd.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
### Examples
```
immer.aov <- aov(cbind(Y1,Y2) ~ Loc + Var, data = immer)
summary(immer.aov)
immer.aov <- aov((Y1+Y2)/2 ~ Var + Loc, data = immer)
summary(immer.aov)
model.tables(immer.aov, type = "means", se = TRUE, cterms = "Var")
```
r None
`stormer` The Stormer Viscometer Data
--------------------------------------
### Description
The stormer viscometer measures the viscosity of a fluid by measuring the time taken for an inner cylinder in the mechanism to perform a fixed number of revolutions in response to an actuating weight. The viscometer is calibrated by measuring the time taken with varying weights while the mechanism is suspended in fluids of accurately known viscosity. The data comes from such a calibration, and theoretical considerations suggest a nonlinear relationship between time, weight and viscosity, of the form `Time = (B1*Viscosity)/(Weight - B2) + E` where `B1` and `B2` are unknown parameters to be estimated, and `E` is error.
### Usage
```
stormer
```
### Format
The data frame contains the following components:
`Viscosity`
viscosity of fluid.
`Wt`
actuating weight.
`Time`
time taken.
### Source
E. J. Williams (1959) *Regression Analysis.* Wiley.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
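### Examples
An illustrative sketch of fitting the nonlinear calibration model described above; the starting values are rough guesses:
```
stormer.fm <- nls(Time ~ b1 * Viscosity/(Wt - b2), data = stormer,
                  start = list(b1 = 29, b2 = 2))
summary(stormer.fm)
```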
r None
`Traffic` Effect of Swedish Speed Limits on Accidents
------------------------------------------------------
### Description
An experiment was performed in Sweden in 1961–2 to assess the effect of a speed limit on the motorway accident rate. The experiment was conducted on 92 days in each year, matched so that day `j` in 1962 was comparable to day `j` in 1961. On some days the speed limit was in effect and enforced, while on other days there was no speed limit and cars tended to be driven faster. The speed limit days tended to be in contiguous blocks.
### Usage
```
Traffic
```
### Format
This data frame contains the following columns:
`year`
1961 or 1962.
`day`
of year.
`limit`
was there a speed limit?
`y`
traffic accident count for that day.
### Source
Svensson, A. (1981) On the goodness-of-fit test for the multiplicative Poisson model. *Annals of Statistics,* **9**, 697–704.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
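### Examples
An illustrative sketch: a Poisson log-linear model for the daily accident counts with terms for year and the speed limit:
```
summary(glm(y ~ limit + factor(year), family = poisson, data = Traffic))
```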
r None
`Sitka` Growth Curves for Sitka Spruce Trees in 1988
-----------------------------------------------------
### Description
The `Sitka` data frame has 395 rows and 4 columns. It gives repeated measurements on the log-size of 79 Sitka spruce trees, 54 of which were grown in ozone-enriched chambers and 25 were controls. The size was measured five times in 1988, at roughly monthly intervals.
### Usage
```
Sitka
```
### Format
This data frame contains the following columns:
`size`
measured size (height times diameter squared) of tree, on log scale.
`Time`
time of measurement in days since 1 January 1988.
`tree`
number of tree.
`treat`
either `"ozone"` for an ozone-enriched chamber or `"control"`.
### Source
P. J. Diggle, K.-Y. Liang and S. L. Zeger (1994) *Analysis of Longitudinal Data.* Clarendon Press, Oxford
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[Sitka89](sitka89)`.
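### Examples
An illustrative sketch, assuming the `lattice` package is available: one growth curve per tree, split by treatment:
```
library(lattice)
xyplot(size ~ Time | treat, data = Sitka, groups = tree, type = "l")
```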
r None
`renumerate` Convert a Formula Transformed by 'denumerate'
-----------------------------------------------------------
### Description
`<denumerate>` converts a formula written using the conventions of `<loglm>` into one that `[terms](../../stats/html/terms)` is able to process. `renumerate` converts it back again to a form like the original.
### Usage
```
renumerate(x)
```
### Arguments
| | |
| --- | --- |
| `x` | A formula, normally as modified by `<denumerate>`. |
### Details
This is an inverse function to `<denumerate>`. It is only needed since `[terms](../../stats/html/terms)` returns an expanded form of the original formula where the non-marginal terms are exposed. This expanded form is mapped back into a form corresponding to the one that the user originally supplied.
### Value
A formula in which all variables with names of the form `.vn`, where `n` is an integer, are converted back to numbers, `n`, as allowed by the formula conventions of `<loglm>`.
### See Also
`<denumerate>`
### Examples
```
denumerate(~(1+2+3)^3 + a/b)
## ~ (.v1 + .v2 + .v3)^3 + a/b
renumerate(.Last.value)
## ~ (1 + 2 + 3)^3 + a/b
```
r None
`ginv` Generalized Inverse of a Matrix
---------------------------------------
### Description
Calculates the Moore-Penrose generalized inverse of a matrix `X`.
### Usage
```
ginv(X, tol = sqrt(.Machine$double.eps))
```
### Arguments
| | |
| --- | --- |
| `X` | Matrix for which the Moore-Penrose inverse is required. |
| `tol` | A relative tolerance to detect zero singular values. |
### Value
A MP generalized inverse matrix for `X`.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer. p.100.
### See Also
`[solve](../../matrix/html/solve-methods)`, `[svd](../../base/html/svd)`, `[eigen](../../base/html/eigen)`
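### Examples
An illustrative sketch: `ginv` agrees with `solve` for a non-singular matrix and satisfies the Moore-Penrose property for a rank-deficient one:
```
A <- matrix(c(2, 1, 1, 3), 2, 2)
max(abs(ginv(A) - solve(A)))        # essentially zero
B <- cbind(1, 1:4, 2 * (1:4))       # rank 2: third column is twice the second
range(B %*% ginv(B) %*% B - B)      # B %*% ginv(B) %*% B recovers B
```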
r None
`OME` Tests of Auditory Perception in Children with OME
--------------------------------------------------------
### Description
Experiments were performed on children on their ability to differentiate a signal in broad-band noise. The noise was played from a pair of speakers and a signal was added to just one channel; the subject had to turn his/her head to the channel with the added signal. The signal was either coherent (the amplitude of the noise was increased for a period) or incoherent (independent noise was added for the same period to form the same increase in power).
The threshold used in the original analysis was the stimulus loudness needed to get 75% correct responses. Some of the children had suffered from otitis media with effusion (OME).
### Usage
```
OME
```
### Format
The `OME` data frame has 1129 rows and 7 columns:
`ID`
Subject ID (1 to 99, with some IDs missing). A few subjects were measured at different ages.
`OME`
`"low"` or `"high"` or `"N/A"` (at ages other than 30 and 60 months).
`Age`
Age of the subject (months).
`Loud`
Loudness of stimulus, in decibels.
`Noise`
Whether the signal in the stimulus was `"coherent"` or `"incoherent"`.
`Correct`
Number of correct responses from `Trials` trials.
`Trials`
Number of trials performed.
### Background
The experiment was to study otitis media with effusion (OME), a very common childhood condition where the middle ear space, which is normally air-filled, becomes congested by a fluid. There is a concomitant fluctuating, conductive hearing loss which can result in various language, cognitive and social deficits. The term ‘binaural hearing’ is used to describe the listening conditions in which the brain is processing information from both ears at the same time. The brain computes differences in the intensity and/or timing of signals arriving at each ear which contributes to sound localisation and also to our ability to hear in background noise.
Some years ago, it was found that children of 7–8 years with a history of significant OME had significantly worse binaural hearing than children without such a history, despite having equivalent sensitivity. The question remained as to whether it was the timing, the duration, or the degree of severity of the otitis media episodes during critical periods, which affected later binaural hearing. In an attempt to begin to answer this question, 95 children were monitored for the presence of effusion every month since birth. On the basis of OME experience in their first two years, the test population was split into one group of high OME prevalence and one of low prevalence.
### Source
Sarah Hogan, Dept of Physiology, University of Oxford, via Dept of Statistics Consulting Service
### Examples
```
# Fit logistic curve from p = 0.5 to p = 1.0
fp1 <- deriv(~ 0.5 + 0.5/(1 + exp(-(x-L75)/scal)),
c("L75", "scal"),
function(x,L75,scal)NULL)
nls(Correct/Trials ~ fp1(Loud, L75, scal), data = OME,
start = c(L75=45, scal=3))
nls(Correct/Trials ~ fp1(Loud, L75, scal),
data = OME[OME$Noise == "coherent",],
start=c(L75=45, scal=3))
nls(Correct/Trials ~ fp1(Loud, L75, scal),
data = OME[OME$Noise == "incoherent",],
start = c(L75=45, scal=3))
# individual fits for each experiment
aa <- factor(OME$Age)
ab <- 10*OME$ID + unclass(aa)
ac <- unclass(factor(ab))
OME$UID <- as.vector(ac)
OME$UIDn <- OME$UID + 0.1*(OME$Noise == "incoherent")
rm(aa, ab, ac)
OMEi <- OME
library(nlme)
fp2 <- deriv(~ 0.5 + 0.5/(1 + exp(-(x-L75)/2)),
"L75", function(x,L75) NULL)
dec <- getOption("OutDec")
options(show.error.messages = FALSE, OutDec=".")
OMEi.nls <- nlsList(Correct/Trials ~ fp2(Loud, L75) | UIDn,
data = OMEi, start = list(L75=45), control = list(maxiter=100))
options(show.error.messages = TRUE, OutDec=dec)
tmp <- sapply(OMEi.nls, function(X)
{if(is.null(X)) NA else as.vector(coef(X))})
OMEif <- data.frame(UID = round(as.numeric((names(tmp)))),
Noise = rep(c("coherent", "incoherent"), 110),
L75 = as.vector(tmp), stringsAsFactors = TRUE)
OMEif$Age <- OME$Age[match(OMEif$UID, OME$UID)]
OMEif$OME <- OME$OME[match(OMEif$UID, OME$UID)]
OMEif <- OMEif[OMEif$L75 > 30,]
summary(lm(L75 ~ Noise/Age, data = OMEif, na.action = na.omit))
summary(lm(L75 ~ Noise/(Age + OME), data = OMEif,
subset = (Age >= 30 & Age <= 60),
na.action = na.omit), cor = FALSE)
# Or fit by weighted least squares
fpl75 <- deriv(~ sqrt(n)*(r/n - 0.5 - 0.5/(1 + exp(-(x-L75)/scal))),
c("L75", "scal"),
function(r,n,x,L75,scal) NULL)
nls(0 ~ fpl75(Correct, Trials, Loud, L75, scal),
data = OME[OME$Noise == "coherent",],
start = c(L75=45, scal=3))
nls(0 ~ fpl75(Correct, Trials, Loud, L75, scal),
data = OME[OME$Noise == "incoherent",],
start = c(L75=45, scal=3))
# Test to see if the curves shift with age
fpl75age <- deriv(~sqrt(n)*(r/n - 0.5 - 0.5/(1 +
exp(-(x-L75-slope*age)/scal))),
c("L75", "slope", "scal"),
function(r,n,x,age,L75,slope,scal) NULL)
OME.nls1 <-
nls(0 ~ fpl75age(Correct, Trials, Loud, Age, L75, slope, scal),
data = OME[OME$Noise == "coherent",],
start = c(L75=45, slope=0, scal=2))
sqrt(diag(vcov(OME.nls1)))
OME.nls2 <-
nls(0 ~ fpl75age(Correct, Trials, Loud, Age, L75, slope, scal),
data = OME[OME$Noise == "incoherent",],
start = c(L75=45, slope=0, scal=2))
sqrt(diag(vcov(OME.nls2)))
# Now allow random effects by using NLME
OMEf <- OME[rep(1:nrow(OME), OME$Trials),]
OMEf$Resp <- with(OME, rep(rep(c(1,0), length(Trials)),
t(cbind(Correct, Trials-Correct))))
OMEf <- OMEf[, -match(c("Correct", "Trials"), names(OMEf))]
## Not run: ## these fail in R on most platforms
fp2 <- deriv(~ 0.5 + 0.5/(1 + exp(-(x-L75)/exp(lsc))),
c("L75", "lsc"),
function(x, L75, lsc) NULL)
try(summary(nlme(Resp ~ fp2(Loud, L75, lsc),
fixed = list(L75 ~ Age, lsc ~ 1),
random = L75 + lsc ~ 1 | UID,
data = OMEf[OMEf$Noise == "coherent",], method = "ML",
start = list(fixed=c(L75=c(48.7, -0.03), lsc=0.24)), verbose = TRUE)))
try(summary(nlme(Resp ~ fp2(Loud, L75, lsc),
fixed = list(L75 ~ Age, lsc ~ 1),
random = L75 + lsc ~ 1 | UID,
data = OMEf[OMEf$Noise == "incoherent",], method = "ML",
start = list(fixed=c(L75=c(41.5, -0.1), lsc=0)), verbose = TRUE)))
## End(Not run)
```
r None
`width.SJ` Bandwidth Selection by Pilot Estimation of Derivatives
------------------------------------------------------------------
### Description
Uses the method of Sheather & Jones (1991) to select the bandwidth of a Gaussian kernel density estimator.
### Usage
```
width.SJ(x, nb = 1000, lower, upper, method = c("ste", "dpi"))
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector |
| `nb` | number of bins to use. |
| `upper, lower` | range over which to search for solution if `method = "ste"`. |
| `method` | Either `"ste"` ("solve-the-equation") or `"dpi"` ("direct plug-in"). |
### Value
a bandwidth.
### Note
A faster version for large `n` (thousands) is available in **R** *>=* 3.4.0 as part of `[bw.SJ](../../stats/html/bandwidth)`: quadruple its value for comparability with this version.
### References
Sheather, S. J. and Jones, M. C. (1991) A reliable data-based bandwidth selection method for kernel density estimation. *Journal of the Royal Statistical Society series B* **53**, 683–690.
Scott, D. W. (1992) *Multivariate Density Estimation: Theory, Practice, and Visualization.* Wiley.
Wand, M. P. and Jones, M. C. (1995) *Kernel Smoothing.* Chapman & Hall.
### See Also
`<ucv>`, `<bcv>`, `[density](../../stats/html/density)`
### Examples
```
width.SJ(geyser$duration, method = "dpi")
width.SJ(geyser$duration)
width.SJ(galaxies, method = "dpi")
width.SJ(galaxies)
```
r None
`write.matrix` Write a Matrix or Data Frame
--------------------------------------------
### Description
Writes a matrix or data frame to a file or the console, using column labels and a layout respecting columns.
### Usage
```
write.matrix(x, file = "", sep = " ", blocksize)
```
### Arguments
| | |
| --- | --- |
| `x` | matrix or data frame. |
| `file` | name of output file. The default (`""`) is the console. |
| `sep` | The separator between columns. |
| `blocksize` | If supplied and positive, the output is written in blocks of `blocksize` rows. Choose as large as possible consistent with the amount of memory available. |
### Details
If `x` is a matrix, supplying `blocksize` is more memory-efficient and enables larger matrices to be written, but each block of rows might be formatted slightly differently.
If `x` is a data frame, the conversion to a matrix may negate the memory saving.
### Side Effects
A formatted file is produced, with column headings (if `x` has them) and columns of data.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[write.table](../../utils/html/write.table)`
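### Examples
An illustrative sketch writing a small matrix to a temporary file and reading the formatted result back:
```
m <- matrix(1:6, 2, 3, dimnames = list(NULL, c("a", "b", "c")))
tf <- tempfile(fileext = ".txt")
write.matrix(m, file = tf)
readLines(tf)
```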
r None
`shoes` Shoe wear data of Box, Hunter and Hunter
-------------------------------------------------
### Description
A list of two vectors, giving the wear of shoes of materials A and B for one foot each of ten boys.
### Usage
```
shoes
```
### Source
G. E. P. Box, W. G. Hunter and J. S. Hunter (1978) *Statistics for Experimenters.* Wiley, p. 100
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
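### Examples
An illustrative sketch: the classical paired comparison of the two materials:
```
t.test(shoes$A, shoes$B, paired = TRUE)
```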
r None
`con2tr` Convert Lists to Data Frames for use by lattice
---------------------------------------------------------
### Description
Convert lists to data frames for use by lattice.
### Usage
```
con2tr(obj)
```
### Arguments
| | |
| --- | --- |
| `obj` | A list of components `x`, `y` and `z` as passed to `contour`. |
### Details
`con2tr` repeats the `x` and `y` components suitably to match the vector `z`.
### Value
A data frame suitable for passing to lattice (formerly trellis) functions.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
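### Examples
An illustrative sketch with a tiny hand-made grid in the `x`, `y`, `z` form expected by `contour`:
```
grid <- list(x = 1:3, y = 1:2, z = matrix(1:6, 3, 2))
con2tr(grid)   # 6 rows: x and y expanded to match z
```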
r None
`eagles` Foraging Ecology of Bald Eagles
-----------------------------------------
### Description
Knight and Skagen collected data during a field study on the foraging behaviour of wintering Bald Eagles in Washington State, USA. The data concern 160 attempts by one (pirating) Bald Eagle to steal a chum salmon from another (feeding) Bald Eagle.
### Usage
```
eagles
```
### Format
The `eagles` data frame has 8 rows and 5 columns.
`y`
Number of successful attempts.
`n`
Total number of attempts.
`P`
Size of pirating eagle (`L` = large, `S` = small).
`A`
Age of pirating eagle (`I` = immature, `A` = adult).
`V`
Size of victim eagle (`L` = large, `S` = small).
### Source
Knight, R. L. and Skagen, S. K. (1988) Agonistic asymmetries and the foraging ecology of Bald Eagles. *Ecology* **69**, 1188–1194.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
### Examples
```
eagles.glm <- glm(cbind(y, n - y) ~ P*A + V, data = eagles,
family = binomial)
dropterm(eagles.glm)
prof <- profile(eagles.glm)
plot(prof)
pairs(prof)
```
r None
`crabs` Morphological Measurements on Leptograpsus Crabs
---------------------------------------------------------
### Description
The `crabs` data frame has 200 rows and 8 columns, describing 5 morphological measurements on 50 crabs each of two colour forms and both sexes, of the species *Leptograpsus variegatus* collected at Fremantle, W. Australia.
### Usage
```
crabs
```
### Format
This data frame contains the following columns:
`sp`
`species` - `"B"` or `"O"` for blue or orange.
`sex`
as it says.
`index`
index `1:50` within each of the four groups.
`FL`
frontal lobe size (mm).
`RW`
rear width (mm).
`CL`
carapace length (mm).
`CW`
carapace width (mm).
`BD`
body depth (mm).
### Source
Campbell, N.A. and Mahon, R.J. (1974) A multivariate study of variation in two species of rock crab of genus *Leptograpsus.* *Australian Journal of Zoology* **22**, 417–425.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
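### Examples
An illustrative sketch: pairwise scatterplots of the five measurements, coloured by colour form:
```
pairs(crabs[, c("FL", "RW", "CL", "CW", "BD")],
      col = ifelse(crabs$sp == "B", "blue", "orange"))
```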
r None
`predict.lqs` Predict from an lqs Fit
--------------------------------------
### Description
Predict from a resistant regression fitted by `lqs`.
### Usage
```
## S3 method for class 'lqs'
predict(object, newdata, na.action = na.pass, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | object inheriting from class `"lqs"` |
| `newdata` | matrix or data frame of cases to be predicted or, if `object` has a formula, a data frame with columns of the same names as the variables used. A vector will be interpreted as a row vector. If `newdata` is missing, an attempt will be made to retrieve the data used to fit the `lqs` object. |
| `na.action` | function determining what should be done with missing values in `newdata`. The default is to predict `NA`. |
| `...` | arguments to be passed from or to other methods. |
### Details
This function is a method for the generic function `predict()` for class `lqs`. It can be invoked by calling `predict(x)` for an object `x` of the appropriate class, or directly by calling `predict.lqs(x)` regardless of the class of the object.
Missing values in `newdata` are handled by returning `NA` if the linear fit cannot be evaluated. If `newdata` is omitted and the `na.action` of the fit omitted cases, these will be omitted on the prediction.
### Value
A vector of predictions.
### Author(s)
B.D. Ripley
### See Also
`<lqs>`
### Examples
```
set.seed(123)
fm <- lqs(stack.loss ~ ., data = stackloss, method = "S", nsamp = "exact")
predict(fm, stackloss)
```
r None
`steam` The Saturated Steam Pressure Data
------------------------------------------
### Description
Temperature and pressure in a saturated steam driven experimental device.
### Usage
```
steam
```
### Format
The data frame contains the following components:
`Temp`
temperature, in degrees Celsius.
`Press`
pressure, in Pascals.
### Source
N.R. Draper and H. Smith (1981) *Applied Regression Analysis.* Wiley, pp. 518–9.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
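### Examples
An illustrative sketch of the strongly nonlinear relationship:
```
plot(Press ~ Temp, data = steam,
     xlab = "Temperature (deg C)", ylab = "Pressure")
```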
r None
`chem` Copper in Wholemeal Flour
---------------------------------
### Description
A numeric vector of 24 determinations of copper in wholemeal flour, in parts per million.
### Usage
```
chem
```
### Source
Analytical Methods Committee (1989) Robust statistics – how not to reject outliers. *The Analyst* **114**, 1693–1702.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
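### Examples
An illustrative sketch comparing classical and robust location estimates for these determinations:
```
c(mean = mean(chem), median = median(chem))
unlist(huber(chem))   # Huber M-estimate with MAD scale
```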
r None
`dose.p` Predict Doses for Binomial Assay model
------------------------------------------------
### Description
Calibrate binomial assays, generalizing the calculation of LD50.
### Usage
```
dose.p(obj, cf = 1:2, p = 0.5)
```
### Arguments
| | |
| --- | --- |
| `obj` | A fitted model object of class inheriting from `"glm"`. |
| `cf` | The terms in the coefficient vector giving the intercept and coefficient of (log-)dose |
| `p` | Probabilities at which to predict the dose needed. |
### Value
An object of class `"glm.dose"` giving the prediction (attribute `"p"` and standard error (attribute `"SE"`) at each response probability.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Springer.
### Examples
```
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20 - numdead)
budworm.lg0 <- glm(SF ~ sex + ldose - 1, family = binomial)
dose.p(budworm.lg0, cf = c(1,3), p = 1:3/4)
dose.p(update(budworm.lg0, family = binomial(link=probit)),
cf = c(1,3), p = 1:3/4)
```
r None
`summary.rlm` Summary Method for Robust Linear Models
------------------------------------------------------
### Description
`summary` method for objects of class `"rlm"`
### Usage
```
## S3 method for class 'rlm'
summary(object, method = c("XtX", "XtWX"), correlation = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | the fitted model. This is assumed to be the result of some fit that produces an object inheriting from the class `rlm`, in the sense that the components returned by the `rlm` function will be available. |
| `method` | Should the weighted (by the IWLS weights) or unweighted cross-products matrix be used? |
| `correlation` | logical. Should correlations be computed (and printed)? |
| `...` | arguments passed to or from other methods. |
### Details
This function is a method for the generic function `summary()` for class `"rlm"`. It can be invoked by calling `summary(x)` for an object `x` of the appropriate class, or directly by calling `summary.rlm(x)` regardless of the class of the object.
### Value
If printing takes place, only a null value is returned. Otherwise, a list is returned with the following components. Printing always takes place if this function is invoked automatically as a method for the `summary` function.
| | |
| --- | --- |
| `correlation` | The computed correlation coefficient matrix for the coefficients in the model. |
| `cov.unscaled` | The unscaled covariance matrix; i.e, a matrix such that multiplying it by an estimate of the error variance produces an estimated covariance matrix for the coefficients. |
| `sigma` | The scale estimate. |
| `stddev` | A scale estimate used for the standard errors. |
| `df` | The number of degrees of freedom for the model and for residuals. |
| `coefficients` | A matrix with three columns, containing the coefficients, their standard errors and the corresponding t statistic. |
| `terms` | The terms object used in fitting this model. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[summary](../../base/html/summary)`
### Examples
```
summary(rlm(calls ~ year, data = phones, maxit = 50))
```
r None
`stepAIC` Choose a model by AIC in a Stepwise Algorithm
--------------------------------------------------------
### Description
Performs stepwise model selection by AIC.
### Usage
```
stepAIC(object, scope, scale = 0,
direction = c("both", "backward", "forward"),
trace = 1, keep = NULL, steps = 1000, use.start = FALSE,
k = 2, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | an object representing a model of an appropriate class. This is used as the initial model in the stepwise search. |
| `scope` | defines the range of models examined in the stepwise search. This should be either a single formula, or a list containing components `upper` and `lower`, both formulae. See the details for how to specify the formulae and how they are used. |
| `scale` | used in the definition of the AIC statistic for selecting the models, currently only for `[lm](../../stats/html/lm)` and `[aov](../../stats/html/aov)` models (see `[extractAIC](../../stats/html/extractaic)` for details). |
| `direction` | the mode of stepwise search, can be one of `"both"`, `"backward"`, or `"forward"`, with a default of `"both"`. If the `scope` argument is missing the default for `direction` is `"backward"`. |
| `trace` | if positive, information is printed during the running of `stepAIC`. Larger values may give more information on the fitting process. |
| `keep` | a filter function whose input is a fitted model object and the associated `AIC` statistic, and whose output is arbitrary. Typically `keep` will select a subset of the components of the object and return them. The default is not to keep anything. |
| `steps` | the maximum number of steps to be considered. The default is 1000 (essentially as many as required). It is typically used to stop the process early. |
| `use.start` | if true the updated fits are done starting at the linear predictor for the currently selected model. This may speed up the iterative calculations for `glm` (and other fits), but it can also slow them down. **Not used** in **R**. |
| `k` | the multiple of the number of degrees of freedom used for the penalty. Only `k = 2` gives the genuine AIC: `k = log(n)` is sometimes referred to as BIC or SBC. |
| `...` | any additional arguments to `extractAIC`. (None are currently used.) |
### Details
The set of models searched is determined by the `scope` argument. The right-hand side of its `lower` component is always included in the model, and the right-hand side of the model is included in the `upper` component. If `scope` is a single formula, it specifies the `upper` component, and the `lower` model is empty. If `scope` is missing, the initial model is used as the `upper` model.
Models specified by `scope` can be templates to update `object` as used by `[update.formula](../../stats/html/update.formula)`.
There is a potential problem in using `[glm](../../stats/html/glm)` fits with a variable `scale`, as in that case the deviance is not simply related to the maximized log-likelihood. The `glm` method for `[extractAIC](../../stats/html/extractaic)` makes the appropriate adjustment for a `gaussian` family, but may need to be amended for other cases. (The `binomial` and `poisson` families have fixed `scale` by default and do not correspond to a particular maximum-likelihood problem for variable `scale`.)
Where a conventional deviance exists (e.g. for `lm`, `aov` and `glm` fits) this is quoted in the analysis of variance table: it is the *unscaled* deviance.
### Value
the stepwise-selected model is returned, with up to two additional components. There is an `"anova"` component corresponding to the steps taken in the search, as well as a `"keep"` component if the `keep=` argument was supplied in the call. The `"Resid. Dev"` column of the analysis of deviance table refers to a constant minus twice the maximized log likelihood: it will be a deviance only in cases where a saturated model is well-defined (thus excluding `lm`, `aov` and `survreg` fits, for example).
### Note
The model fitting must apply the models to the same dataset. This may be a problem if there are missing values and an `na.action` other than `na.fail` is used (as is the default in **R**). We suggest you remove the missing values first.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<addterm>`, `<dropterm>`, `[step](../../stats/html/step)`
### Examples
```
quine.hi <- aov(log(Days + 2.5) ~ .^4, quine)
quine.nxt <- update(quine.hi, . ~ . - Eth:Sex:Age:Lrn)
quine.stp <- stepAIC(quine.nxt,
scope = list(upper = ~Eth*Sex*Age*Lrn, lower = ~1),
trace = FALSE)
quine.stp$anova
cpus1 <- cpus
for(v in names(cpus)[2:7])
cpus1[[v]] <- cut(cpus[[v]], unique(quantile(cpus[[v]])),
include.lowest = TRUE)
cpus0 <- cpus1[, 2:8] # excludes names, authors' predictions
cpus.samp <- sample(1:209, 100)
cpus.lm <- lm(log10(perf) ~ ., data = cpus1[cpus.samp,2:8])
cpus.lm2 <- stepAIC(cpus.lm, trace = FALSE)
cpus.lm2$anova
example(birthwt)
birthwt.glm <- glm(low ~ ., family = binomial, data = bwt)
birthwt.step <- stepAIC(birthwt.glm, trace = FALSE)
birthwt.step$anova
birthwt.step2 <- stepAIC(birthwt.glm, ~ .^2 + I(scale(age)^2)
+ I(scale(lwt)^2), trace = FALSE)
birthwt.step2$anova
quine.nb <- glm.nb(Days ~ .^4, data = quine)
quine.nb2 <- stepAIC(quine.nb)
quine.nb2$anova
```
r None
`farms` Ecological Factors in Farm Management
----------------------------------------------
### Description
The `farms` data frame has 20 rows and 4 columns. The rows are farms on the Dutch island of Terschelling and the columns are factors describing the management of grassland.
### Usage
```
farms
```
### Format
This data frame contains the following columns:
`Mois`
Five levels of soil moisture – level 3 does not occur at these 20 farms.
`Manag`
Grassland management type (`SF` = standard, `BF` = biological, `HF` = hobby farming, `NM` = nature conservation).
`Use`
Grassland use (`U1` = hay production, `U2` = intermediate, `U3` = grazing).
`Manure`
Manure usage – classes `C0` to `C4`.
### Source
J.C. Gower and D.J. Hand (1996) *Biplots*. Chapman & Hall, Table 4.6.
Quoted as from:
R.H.G. Jongman, C.J.F. ter Braak and O.F.R. van Tongeren (1987) *Data Analysis in Community and Landscape Ecology.* PUDOC, Wageningen.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
farms.mca <- mca(farms, abbrev = TRUE) # Use levels as names
eqscplot(farms.mca$cs, type = "n")
text(farms.mca$rs, cex = 0.7)
text(farms.mca$cs, labels = dimnames(farms.mca$cs)[[1]], cex = 0.7)
```
r None
`denumerate` Transform an Allowable Formula for 'loglm' into one for 'terms'
-----------------------------------------------------------------------------
### Description
`<loglm>` allows dimension numbers to be used in place of names in the formula. `denumerate` modifies such a formula into one that `[terms](../../stats/html/terms)` can process.
### Usage
```
denumerate(x)
```
### Arguments
| | |
| --- | --- |
| `x` | A formula conforming to the conventions of `<loglm>`, that is, it may allow dimension numbers to stand in for names when specifying a log-linear model. |
### Details
The model fitting function `<loglm>` fits log-linear models to frequency data using iterative proportional scaling. To specify the model the user must nominate the margins in the data that remain fixed under the log-linear model. It is convenient to allow the user to use dimension numbers, 1, 2, 3, ... for the first, second, third, ..., margins in a similar way to variable names. As the model formula has to be parsed by `[terms](../../stats/html/terms)`, which treats `1` in a special way and requires parseable variable names, these formulae have to be modified by giving genuine names for these margin, or dimension numbers. `denumerate` replaces these numbers with names of a special form, namely `n` is replaced by `.vn`. This allows `terms` to parse the formula in the usual way.
### Value
A linear model formula like that presented, except that where dimension numbers, say `n`, have been used to specify fixed margins these are replaced by names of the form `.vn` which may be processed by `terms`.
### See Also
`<renumerate>`
### Examples
```
denumerate(~(1+2+3)^3 + a/b)
## which gives ~ (.v1 + .v2 + .v3)^3 + a/b
```
r None
`beav2` Body Temperature Series of Beaver 2
--------------------------------------------
### Description
Reynolds (1994) describes a small part of a study of the long-term temperature dynamics of beaver *Castor canadensis* in north-central Wisconsin. Body temperature was measured by telemetry every 10 minutes for four females, but data from one period of less than a day for each of two animals are used there.
### Usage
```
beav2
```
### Format
The `beav2` data frame has 100 rows and 4 columns. This data frame contains the following columns:
`day`
Day of observation (in days since the beginning of 1990), November 3–4.
`time`
Time of observation, in the form `0330` for 3.30am.
`temp`
Measured body temperature in degrees Celsius.
`activ`
Indicator of activity outside the retreat.
### Source
P. S. Reynolds (1994) Time-series analyses of beaver body temperatures. Chapter 11 of Lange, N., Ryan, L., Billard, L., Brillinger, D., Conquest, L. and Greenhouse, J. eds (1994) *Case Studies in Biometry.* New York: John Wiley and Sons.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<beav1>`
### Examples
```
attach(beav2)
beav2$hours <- 24*(day-307) + trunc(time/100) + (time%%100)/60
plot(beav2$hours, beav2$temp, type = "l", xlab = "time",
ylab = "temperature", main = "Beaver 2")
usr <- par("usr"); usr[3:4] <- c(-0.2, 8); par(usr = usr)
lines(beav2$hours, beav2$activ, type = "s", lty = 2)
temp <- ts(temp, start = 8+2/3, frequency = 6)
activ <- ts(activ, start = 8+2/3, frequency = 6)
acf(temp[activ == 0]); acf(temp[activ == 1]) # also look at PACFs
ar(temp[activ == 0]); ar(temp[activ == 1])
arima(temp, order = c(1,0,0), xreg = activ)
dreg <- cbind(sin = sin(2*pi*beav2$hours/24), cos = cos(2*pi*beav2$hours/24))
arima(temp, order = c(1,0,0), xreg = cbind(active=activ, dreg))
## IGNORE_RDIFF_BEGIN
library(nlme) # for gls and corAR1
beav2.gls <- gls(temp ~ activ, data = beav2, corr = corAR1(0.8),
method = "ML")
summary(beav2.gls)
summary(update(beav2.gls, subset = 6:100))
detach("beav2"); rm(temp, activ)
## IGNORE_RDIFF_END
```
r None
`wtloss` Weight Loss Data from an Obese Patient
------------------------------------------------
### Description
The data frame gives the weight, in kilograms, of an obese patient at 52 time points over an 8 month period of a weight rehabilitation programme.
### Usage
```
wtloss
```
### Format
This data frame contains the following columns:
`Days`
time in days since the start of the programme.
`Weight`
weight in kilograms of the patient.
### Source
Dr T. Davies, Adelaide.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
## IGNORE_RDIFF_BEGIN
wtloss.fm <- nls(Weight ~ b0 + b1*2^(-Days/th),
data = wtloss, start = list(b0=90, b1=95, th=120))
wtloss.fm
## IGNORE_RDIFF_END
plot(wtloss)
with(wtloss, lines(Days, fitted(wtloss.fm)))
```
r None
`cov.rob` Resistant Estimation of Multivariate Location and Scatter
--------------------------------------------------------------------
### Description
Compute a multivariate location and scale estimate with a high breakdown point – this can be thought of as estimating the mean and covariance of the `good` part of the data. `cov.mve` and `cov.mcd` are compatibility wrappers.
### Usage
```
cov.rob(x, cor = FALSE, quantile.used = floor((n + p + 1)/2),
method = c("mve", "mcd", "classical"),
nsamp = "best", seed)
cov.mve(...)
cov.mcd(...)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame. |
| `cor` | should the returned result include a correlation matrix? |
| `quantile.used` | the minimum number of the data points regarded as `good` points. |
| `method` | the method to be used – minimum volume ellipsoid, minimum covariance determinant or classical product-moment. Using `cov.mve` or `cov.mcd` forces `mve` or `mcd` respectively. |
| `nsamp` | the number of samples or `"best"` or `"exact"` or `"sample"`. If `"sample"` the number chosen is `min(5*p, 3000)`, taken from Rousseeuw and Hubert (1997). If `"best"` exhaustive enumeration is done up to 5000 samples; if `"exact"` exhaustive enumeration will be attempted. |
| `seed` | the seed to be used for random sampling: see `[RNGkind](../../base/html/random)`. The current value of `.Random.seed` will be preserved if it is set. |
| `...` | arguments to `cov.rob` other than `method`. |
### Details
For method `"mve"`, an approximate search is made of a subset of size `quantile.used` with an enclosing ellipsoid of smallest volume; in method `"mcd"` it is the volume of the Gaussian confidence ellipsoid, equivalently the determinant of the classical covariance matrix, that is minimized. The mean of the subset provides a first estimate of the location, and the rescaled covariance matrix a first estimate of scatter. The Mahalanobis distances of all the points from the location estimate for this covariance matrix are calculated, and those points within the 97.5% point under Gaussian assumptions are declared to be `good`. The final estimates are the mean and rescaled covariance of the `good` points.
The rescaling is by the appropriate percentile under Gaussian data; in addition the first covariance matrix has an *ad hoc* finite-sample correction given by Marazzi.
For method `"mve"` the search is made over ellipsoids determined by the covariance matrix of `p` of the data points. For method `"mcd"` an additional improvement step suggested by Rousseeuw and van Driessen (1999) is used, in which once a subset of size `quantile.used` is selected, an ellipsoid based on its covariance is tested (as this will have no larger a determinant, and may be smaller).
There is a hard limit on the allowed number of samples, *2^31 - 1*. However, practical limits are likely to be much lower and one might check the number of samples used for exhaustive enumeration, `combn(NROW(x), NCOL(x) + 1)`, before attempting it.
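As a rough illustration of that check (a sketch added here, not part of `cov.rob` itself; `choose()` simply counts the subsets of size `p + 1` that exhaustive enumeration would visit):

```
n <- NROW(stack.x); p <- NCOL(stack.x)
choose(n, p + 1)  # 5985 subsets for the 21 x 3 stack.x data, so "exact" is feasible
```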
### Value
A list with components
| | |
| --- | --- |
| `center` | the final estimate of location. |
| `cov` | the final estimate of scatter. |
| `cor` | (only if `cor = TRUE`) the estimate of the correlation matrix. |
| `sing` | message giving the number of singular samples out of the total. |
| `crit` | the value of the criterion on log scale. For MCD this is the determinant, and for MVE it is proportional to the volume. |
| `best` | the subset used. For MVE the best sample, for MCD the best set of size `quantile.used`. |
| `n.obs` | total number of observations. |
### References
P. J. Rousseeuw and A. M. Leroy (1987) *Robust Regression and Outlier Detection.* Wiley.
A. Marazzi (1993) *Algorithms, Routines and S Functions for Robust Statistics.* Wadsworth and Brooks/Cole.
P. J. Rousseeuw and B. C. van Zomeren (1990) Unmasking multivariate outliers and leverage points, *Journal of the American Statistical Association*, **85**, 633–639.
P. J. Rousseeuw and K. van Driessen (1999) A fast algorithm for the minimum covariance determinant estimator. *Technometrics* **41**, 212–223.
P. Rousseeuw and M. Hubert (1997) Recent developments in PROGRESS. In *L1-Statistical Procedures and Related Topics* ed Y. Dodge, IMS Lecture Notes volume **31**, pp. 201–214.
### See Also
`<lqs>`
### Examples
```
set.seed(123)
cov.rob(stackloss)
cov.rob(stack.x, method = "mcd", nsamp = "exact")
```
r None
`forbes` Forbes' Data on Boiling Points in the Alps
----------------------------------------------------
### Description
A data frame with 17 observations on boiling point of water and barometric pressure in inches of mercury.
### Usage
```
forbes
```
### Format
`bp`
boiling point (degrees Fahrenheit).
`pres`
barometric pressure in inches of mercury.
### Source
A. C. Atkinson (1985) *Plots, Transformations and Regression.* Oxford.
S. Weisberg (1980) *Applied Linear Regression.* Wiley.
r None
`waders` Counts of Waders at 15 Sites in South Africa
------------------------------------------------------
### Description
The `waders` data frame has 15 rows and 19 columns. The entries are counts of waders in summer.
### Usage
```
waders
```
### Format
This data frame contains the following columns (species)
`S1`
Oystercatcher
`S2`
White-fronted Plover
`S3`
Kittlitz's Plover
`S4`
Three-banded Plover
`S5`
Grey Plover
`S6`
Ringed Plover
`S7`
Bar-tailed Godwit
`S8`
Whimbrel
`S9`
Marsh Sandpiper
`S10`
Greenshank
`S11`
Common Sandpiper
`S12`
Turnstone
`S13`
Knot
`S14`
Sanderling
`S15`
Little Stint
`S16`
Curlew Sandpiper
`S17`
Ruff
`S18`
Avocet
`S19`
Black-winged Stilt
The rows are the sites:
A = Namibia North coast
B = Namibia North wetland
C = Namibia South coast
D = Namibia South wetland
E = Cape North coast
F = Cape North wetland
G = Cape West coast
H = Cape West wetland
I = Cape South coast
J = Cape South wetland
K = Cape East coast
L = Cape East wetland
M = Transkei coast
N = Natal coast
O = Natal wetland
### Source
J.C. Gower and D.J. Hand (1996) *Biplots*. Chapman & Hall, Table 9.1. Quoted as from:
R.W. Summers, L.G. Underhill, D.J. Pearson and D.A. Scott (1987) Wader migration systems in south and eastern Africa and western Asia. *Wader Study Group Bulletin* **49** Supplement, 15–34.
### Examples
```
plot(corresp(waders, nf=2))
```
r None
`Melanoma` Survival from Malignant Melanoma
--------------------------------------------
### Description
The `Melanoma` data frame has data on 205 patients in Denmark with malignant melanoma.
### Usage
```
Melanoma
```
### Format
This data frame contains the following columns:
`time`
survival time in days, possibly censored.
`status`
`1` died from melanoma, `2` alive, `3` dead from other causes.
`sex`
`1` = male, `0` = female.
`age`
age in years.
`year`
of operation.
`thickness`
tumour thickness in mm.
`ulcer`
`1` = presence, `0` = absence.
### Source
P. K. Andersen, O. Borgan, R. D. Gill and N. Keiding (1993) *Statistical Models based on Counting Processes.* Springer.
r None
`UScereal` Nutritional and Marketing Information on US Cereals
---------------------------------------------------------------
### Description
The `UScereal` data frame has 65 rows and 11 columns. The data come from the 1993 ASA Statistical Graphics Exposition, and are taken from the mandatory FDA food label. The data have been normalized here to a portion of one American cup.
### Usage
```
UScereal
```
### Format
This data frame contains the following columns:
`mfr`
Manufacturer, represented by its first initial: G=General Mills, K=Kelloggs, N=Nabisco, P=Post, Q=Quaker Oats, R=Ralston Purina.
`calories`
number of calories in one portion.
`protein`
grams of protein in one portion.
`fat`
grams of fat in one portion.
`sodium`
milligrams of sodium in one portion.
`fibre`
grams of dietary fibre in one portion.
`carbo`
grams of complex carbohydrates in one portion.
`sugars`
grams of sugars in one portion.
`shelf`
display shelf (1, 2, or 3, counting from the floor).
`potassium`
grams of potassium.
`vitamins`
vitamins and minerals (none, enriched, or 100%).
### Source
The original data are available at <http://lib.stat.cmu.edu/datasets/1993.expo/>.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
r None
`theta.md` Estimate theta of the Negative Binomial
---------------------------------------------------
### Description
Given the estimated mean vector, estimate `theta` of the Negative Binomial Distribution.
### Usage
```
theta.md(y, mu, dfr, weights, limit = 20, eps = .Machine$double.eps^0.25)
theta.ml(y, mu, n, weights, limit = 10, eps = .Machine$double.eps^0.25,
trace = FALSE)
theta.mm(y, mu, dfr, weights, limit = 10, eps = .Machine$double.eps^0.25)
```
### Arguments
| | |
| --- | --- |
| `y` | Vector of observed values from the Negative Binomial. |
| `mu` | Estimated mean vector. |
| `n` | Number of data points (defaults to the sum of `weights`) |
| `dfr` | Residual degrees of freedom (assuming `theta` known). For a weighted fit this is the sum of the weights minus the number of fitted parameters. |
| `weights` | Case weights. If missing, taken as 1. |
| `limit` | Limit on the number of iterations. |
| `eps` | Tolerance to determine convergence. |
| `trace` | logical: should iteration progress be printed? |
### Details
`theta.md` estimates by equating the deviance to the residual degrees of freedom, an analogue of a moment estimator.
`theta.ml` uses maximum likelihood.
`theta.mm` calculates the moment estimator of `theta` by equating the Pearson chi-square *sum((y-mu)^2/(mu+mu^2/theta))* to the residual degrees of freedom.
### Value
The required estimate of `theta`, as a scalar. For `theta.ml`, the standard error is given as attribute `"SE"`.
### See Also
`<glm.nb>`
### Examples
```
quine.nb <- glm.nb(Days ~ .^2, data = quine)
theta.md(quine$Days, fitted(quine.nb), dfr = df.residual(quine.nb))
theta.ml(quine$Days, fitted(quine.nb))
theta.mm(quine$Days, fitted(quine.nb), dfr = df.residual(quine.nb))
## weighted example
yeast <- data.frame(cbind(numbers = 0:5, fr = c(213, 128, 37, 18, 3, 1)))
fit <- glm.nb(numbers ~ 1, weights = fr, data = yeast)
summary(fit)
mu <- fitted(fit)
theta.md(yeast$numbers, mu, dfr = 399, weights = yeast$fr)
theta.ml(yeast$numbers, mu, limit = 15, weights = yeast$fr)
theta.mm(yeast$numbers, mu, dfr = 399, weights = yeast$fr)
```
r None
`Pima.tr` Diabetes in Pima Indian Women
----------------------------------------
### Description
A population of women who were at least 21 years old, of Pima Indian heritage and living near Phoenix, Arizona, was tested for diabetes according to World Health Organization criteria. The data were collected by the US National Institute of Diabetes and Digestive and Kidney Diseases. We used the 532 complete records after dropping the (mainly missing) data on serum insulin.
### Usage
```
Pima.tr
Pima.tr2
Pima.te
```
### Format
These data frames contain the following columns:
`npreg`
number of pregnancies.
`glu`
plasma glucose concentration in an oral glucose tolerance test.
`bp`
diastolic blood pressure (mm Hg).
`skin`
triceps skin fold thickness (mm).
`bmi`
body mass index (weight in kg/(height in m)^2).
`ped`
diabetes pedigree function.
`age`
age in years.
`type`
`Yes` or `No`, for diabetic according to WHO criteria.
### Details
The training set `Pima.tr` contains a randomly selected set of 200 subjects, and `Pima.te` contains the remaining 332 subjects. `Pima.tr2` contains `Pima.tr` plus 100 subjects with missing values in the explanatory variables.
### Source
Smith, J. W., Everhart, J. E., Dickson, W. C., Knowler, W. C. and Johannes, R. S. (1988) Using the ADAP learning algorithm to forecast the onset of *diabetes mellitus*. In *Proceedings of the Symposium on Computer Applications in Medical Care (Washington, 1988),* ed. R. A. Greenes, pp. 261–265. Los Alamitos, CA: IEEE Computer Society Press.
Ripley, B.D. (1996) *Pattern Recognition and Neural Networks.* Cambridge: Cambridge University Press.
r None
`plot.profile` Plotting Functions for 'profile' Objects
--------------------------------------------------------
### Description
`[plot](../../graphics/html/plot.default)` and `[pairs](../../graphics/html/pairs)` methods for objects of class `"profile"`.
### Usage
```
## S3 method for class 'profile'
plot(x, ...)
## S3 method for class 'profile'
pairs(x, colours = 2:3, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object inheriting from class `"profile"`. |
| `colours` | Colours to be used for the mean curves conditional on `x` and `y` respectively. |
| `...` | arguments passed to or from other methods. |
### Details
This is the main `plot` method for objects created by `<profile.glm>`. It can also be called on objects created by `[profile.nls](../../stats/html/profile.nls)`, but they have a specific method, `[plot.profile.nls](../../stats/html/plot.profile.nls)`.
The `pairs` method shows, for each pair of parameters x and y, two curves intersecting at the maximum likelihood estimate, which give the loci of the points at which the tangents to the contours of the bivariate profile likelihood become vertical and horizontal, respectively. In the case of an exactly bivariate normal profile likelihood, these two curves would be straight lines giving the conditional means of y|x and x|y, and the contours would be exactly elliptical.
### Author(s)
Originally, D. M. Bates and W. N. Venables. (For S in 1996.)
### See Also
`<profile.glm>`, `[profile.nls](../../stats/html/profile.nls)`.
### Examples
```
## see ?profile.glm for an example using glm fits.
## a version of example(profile.nls) from R >= 2.8.0
fm1 <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD)
pr1 <- profile(fm1, alpha = 0.1)
MASS:::plot.profile(pr1)
pairs(pr1) # a little odd since the parameters are highly correlated
## an example from ?nls
x <- -(1:100)/10
y <- 100 + 10 * exp(x / 2) + rnorm(x)/10
nlmod <- nls(y ~ Const + A * exp(B * x), start=list(Const=100, A=10, B=1))
pairs(profile(nlmod))
```
r None
`muscle` Effect of Calcium Chloride on Muscle Contraction in Rat Hearts
------------------------------------------------------------------------
### Description
The purpose of this experiment was to assess the influence of calcium in solution on the contraction of heart muscle in rats. The left auricle of 21 rat hearts was isolated and on several occasions a constant-length strip of tissue was electrically stimulated and dipped into various concentrations of calcium chloride solution, after which the shortening of the strip was accurately measured as the response.
### Usage
```
muscle
```
### Format
This data frame contains the following columns:
`Strip`
which heart muscle strip was used?
`Conc`
concentration of calcium chloride solution, in multiples of 2.2 mM.
`Length`
the change in length (shortening) of the strip, (allegedly) in mm.
### Source
Linder, A., Chakravarti, I. M. and Vuagnat, P. (1964) Fitting asymptotic regression curves with different asymptotes. In *Contributions to Statistics. Presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday*, ed. C. R. Rao, pp. 221–228. Oxford: Pergamon Press.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth Edition. Springer.
### Examples
```
## IGNORE_RDIFF_BEGIN
A <- model.matrix(~ Strip - 1, data=muscle)
rats.nls1 <- nls(log(Length) ~ cbind(A, rho^Conc),
data = muscle, start = c(rho=0.1), algorithm="plinear")
(B <- coef(rats.nls1))
st <- list(alpha = B[2:22], beta = B[23], rho = B[1])
(rats.nls2 <- nls(log(Length) ~ alpha[Strip] + beta*rho^Conc,
data = muscle, start = st))
## IGNORE_RDIFF_END
Muscle <- with(muscle, {
Muscle <- expand.grid(Conc = sort(unique(Conc)), Strip = levels(Strip))
Muscle$Yhat <- predict(rats.nls2, Muscle)
Muscle <- cbind(Muscle, logLength = rep(as.numeric(NA), 126))
ind <- match(paste(Strip, Conc),
paste(Muscle$Strip, Muscle$Conc))
Muscle$logLength[ind] <- log(Length)
Muscle})
lattice::xyplot(Yhat ~ Conc | Strip, Muscle, as.table = TRUE,
ylim = range(c(Muscle$Yhat, Muscle$logLength), na.rm = TRUE),
subscripts = TRUE, xlab = "Calcium Chloride concentration (mM)",
ylab = "log(Length in mm)", panel =
function(x, y, subscripts, ...) {
panel.xyplot(x, Muscle$logLength[subscripts], ...)
llines(spline(x, y))
})
```
r None
`bcv` Biased Cross-Validation for Bandwidth Selection
------------------------------------------------------
### Description
Uses biased cross-validation to select the bandwidth of a Gaussian kernel density estimator.
### Usage
```
bcv(x, nb = 1000, lower, upper)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector |
| `nb` | number of bins to use. |
| `lower, upper` | Range over which to minimize. The default is almost always satisfactory. |
### Value
a bandwidth
### References
Scott, D. W. (1992) *Multivariate Density Estimation: Theory, Practice, and Visualization.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<ucv>`, `[width.SJ](width.sj)`, `[density](../../stats/html/density)`
### Examples
```
bcv(geyser$duration)
```
r None
`loglm` Fit Log-Linear Models by Iterative Proportional Scaling
----------------------------------------------------------------
### Description
This function provides a front-end to the standard function, `loglin`, to allow log-linear models to be specified and fitted in a manner similar to that of other fitting functions, such as `glm`.
### Usage
```
loglm(formula, data, subset, na.action, ...)
```
### Arguments
| | |
| --- | --- |
| `formula` | A linear model formula specifying the log-linear model. If the left-hand side is empty, the `data` argument is required and must be a (complete) array of frequencies. In this case the variables on the right-hand side may be the names of the `dimnames` attribute of the frequency array, or may be the positive integers: 1, 2, 3, ... used as alternative names for the 1st, 2nd, 3rd, ... dimension (classifying factor). If the left-hand side is not empty it specifies a vector of frequencies. In this case the data argument, if present, must be a data frame from which the left-hand side vector and the classifying factors on the right-hand side are (preferentially) obtained. The usual abbreviation of a `.` to stand for ‘all other variables in the data frame’ is allowed. Any non-factors on the right-hand side of the formula are coerced to factor. |
| `data` | Numeric array or data frame (or list or environment). In the first case it specifies the array of frequencies; in the second it provides the data frame from which the variables occurring in the formula are preferentially obtained in the usual way. This argument may be the result of a call to `[xtabs](../../stats/html/xtabs)`. |
| `subset` | Specifies a subset of the rows in the data frame to be used. The default is to take all rows. |
| `na.action` | Specifies a method for handling missing observations. The default is to fail if missing values are present. |
| `...` | May supply other arguments to the function `<loglm1>`. |
### Details
If the left-hand side of the formula is empty the `data` argument supplies the frequency array and the right-hand side of the formula is used to construct the list of fixed faces as required by `loglin`. Structural zeros may be specified by giving a `start` argument with those entries set to zero, as described in the help information for `loglin`.
If the left-hand side is not empty, all variables on the right-hand side are regarded as classifying factors and an array of frequencies is constructed. If some cells in the complete array are not specified they are treated as structural zeros. The right-hand side of the formula is again used to construct the list of faces on which the observed and fitted totals must agree, as required by `loglin`. Hence terms such as `a:b`, `a*b` and `a/b` are all equivalent.
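A minimal sketch of the structural-zero mechanism described above (the small table and its dimension names are invented purely for illustration):

```
tab <- array(c(10, 5, 0, 8, 12, 7), dim = c(2, 3),
             dimnames = list(A = c("a1", "a2"), B = c("b1", "b2", "b3")))
st <- array(1, dim = dim(tab))
st[1, 2] <- 0                    # mark cell (a1, b2) as a structural zero
loglm(~ A + B, tab, start = st)  # quasi-independence fit
```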
### Value
An object of class `"loglm"` conveying the results of the fitted log-linear model. Methods exist for the generic functions `print`, `summary`, `deviance`, `fitted`, `coef`, `resid`, `anova` and `update`, which perform the expected tasks. Only log-likelihood ratio tests are allowed using `anova`.
The deviance is simply an alternative name for the log-likelihood ratio statistic for testing the current model within a saturated model, in accordance with standard usage in generalized linear models.
### Warning
If structural zeros are present, the calculation of degrees of freedom may not be correct. `loglin` itself takes no action to allow for structural zeros. `loglm` deducts one degree of freedom for each structural zero, but cannot make allowance for gains in error degrees of freedom due to loss of dimension in the model space. (This would require checking the rank of the model matrix, but since iterative proportional scaling methods are developed largely to avoid constructing the model matrix explicitly, the computation is at least difficult.)
When structural zeros (or zero fitted values) are present the estimated coefficients will not be available due to infinite estimates. The deviances will normally continue to be correct, though.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<loglm1>`, `[loglin](../../stats/html/loglin)`
### Examples
```
# The data frames Cars93, minn38 and quine are available
# in the MASS package.
# Case 1: frequencies specified as an array.
sapply(minn38, function(x) length(levels(x)))
## hs phs fol sex f
## 3 4 7 2 0
##minn38a <- array(0, c(3,4,7,2), lapply(minn38[, -5], levels))
##minn38a[data.matrix(minn38[,-5])] <- minn38$f
## or more simply
minn38a <- xtabs(f ~ ., minn38)
fm <- loglm(~ 1 + 2 + 3 + 4, minn38a) # numerals as names.
deviance(fm)
## [1] 3711.9
fm1 <- update(fm, .~.^2)
fm2 <- update(fm, .~.^3, print = TRUE)
## 5 iterations: deviation 0.075
anova(fm, fm1, fm2)
# Case 1. An array generated with xtabs.
loglm(~ Type + Origin, xtabs(~ Type + Origin, Cars93))
# Case 2. Frequencies given as a vector in a data frame
names(quine)
## [1] "Eth" "Sex" "Age" "Lrn" "Days"
fm <- loglm(Days ~ .^2, quine)
gm <- glm(Days ~ .^2, poisson, quine) # check glm.
c(deviance(fm), deviance(gm)) # deviances agree
## [1] 1368.7 1368.7
c(fm$df, gm$df.residual) # resid df do not!
## [1] 127 128
# The loglm residual degrees of freedom is wrong because of
# a non-detectable redundancy in the model matrix.
```
r None
`lqs` Resistant Regression
---------------------------
### Description
Fit a regression to the *good* points in the dataset, thereby achieving a regression estimator with a high breakdown point. `lmsreg` and `ltsreg` are compatibility wrappers.
### Usage
```
lqs(x, ...)
## S3 method for class 'formula'
lqs(formula, data, ...,
method = c("lts", "lqs", "lms", "S", "model.frame"),
subset, na.action, model = TRUE,
x.ret = FALSE, y.ret = FALSE, contrasts = NULL)
## Default S3 method:
lqs(x, y, intercept = TRUE, method = c("lts", "lqs", "lms", "S"),
quantile, control = lqs.control(...), k0 = 1.548, seed, ...)
lmsreg(...)
ltsreg(...)
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula of the form `y ~ x1 + x2 + ...`. |
| `data` | an optional data frame, list or environment from which variables specified in `formula` are preferentially to be taken. |
| `subset` | an index vector specifying the cases to be used in fitting. (NOTE: If given, this argument must be named exactly.) |
| `na.action` | function to specify the action to be taken if `NA`s are found. The default action is for the procedure to fail. Alternatives include `[na.omit](../../stats/html/na.fail)` and `[na.exclude](../../stats/html/na.fail)`, which lead to omission of cases with missing values on any required variable. (NOTE: If given, this argument must be named exactly.) |
| `model, x.ret, y.ret` | logical. If `TRUE` the model frame, the model matrix and the response are returned, respectively. |
| `contrasts` | an optional list. See the `contrasts.arg` of `[model.matrix.default](../../stats/html/model.matrix)`. |
| `x` | a matrix or data frame containing the explanatory variables. |
| `y` | the response: a vector of length the number of rows of `x`. |
| `intercept` | should the model include an intercept? |
| `method` | the method to be used. `model.frame` returns the model frame: for the others see the `Details` section. Using `lmsreg` or `ltsreg` forces `"lms"` and `"lts"` respectively. |
| `quantile` | the quantile to be used: see `Details`. This is over-ridden if `method = "lms"`. |
| `control` | additional control items: see `Details`. |
| `k0` | the cutoff / tuning constant used for *chi()* and *psi()* functions when `method = "S"`, currently corresponding to Tukey's ‘biweight’. |
| `seed` | the seed to be used for random sampling: see `.Random.seed`. The current value of `.Random.seed` will be preserved if it is set. |
| `...` | arguments to be passed to `lqs.default` or `lqs.control`, see `control` above and `Details`. |
### Details
Suppose there are `n` data points and `p` regressors, including any intercept.
The first three methods minimize some function of the sorted squared residuals. For methods `"lqs"` and `"lms"` it is the `quantile`th smallest squared residual, and for `"lts"` it is the sum of the `quantile` smallest squared residuals. `"lqs"` and `"lms"` differ in the defaults for `quantile`, which are `floor((n+p+1)/2)` and `floor((n+1)/2)` respectively. For `"lts"` the default is `floor(n/2) + floor((p+1)/2)`.
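For concreteness (an illustrative sketch only), the default `quantile` for `method = "lts"` can be computed directly; here for the `stackloss` data used in the Examples below, with `n = 21` observations and `p = 4` coefficients (three regressors plus an intercept):

```
n <- nrow(stackloss); p <- 4
floor(n/2) + floor((p + 1)/2)  # default quantile for "lts": 12
```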
The `"S"` estimation method solves for the scale `s` such that the average of a function chi of the residuals divided by `s` is equal to a given constant.
The `control` argument is a list with components
`psamp`:
the size of each sample. Defaults to `p`.
`nsamp`:
the number of samples or `"best"` (the default) or `"exact"` or `"sample"`. If `"sample"` the number chosen is `min(5*p, 3000)`, taken from Rousseeuw and Hubert (1997). If `"best"` exhaustive enumeration is done up to 5000 samples; if `"exact"` exhaustive enumeration will be attempted however many samples are needed.
`adjust`:
should the intercept be optimized for each sample? Defaults to `TRUE`.
### Value
An object of class `"lqs"`. This is a list with components
| | |
| --- | --- |
| `crit` | the value of the criterion for the best solution found, in the case of `method == "S"` before IWLS refinement. |
| `sing` | character. A message about the number of samples which resulted in singular fits. |
| `coefficients` | of the fitted linear model |
| `bestone` | the indices of those points fitted by the best sample found (prior to adjustment of the intercept, if requested). |
| `fitted.values` | the fitted values. |
| `residuals` | the residuals. |
| `scale` | estimate(s) of the scale of the error. The first is based on the fit criterion. The second (not present for `method == "S"`) is based on the variance of those residuals whose absolute value is less than 2.5 times the initial estimate. |
### Note
There seems no reason other than historical to use the `lms` and `lqs` options. LMS estimation is of low efficiency (converging at rate *n^{-1/3}*) whereas LTS has the same asymptotic efficiency as an M estimator with trimming at the quartiles (Marazzi, 1993, p.201). LQS and LTS have the same maximal breakdown value of `(floor((n-p)/2) + 1)/n` attained if `floor((n+p)/2) <= quantile <= floor((n+p+1)/2)`. The only drawback mentioned of LTS is greater computation, as a sort was thought to be required (Marazzi, 1993, p.201) but this is not true as a partial sort can be used (and is used in this implementation).
Adjusting the intercept for each trial fit does need the residuals to be sorted, and may be significant extra computation if `n` is large and `p` small.
Opinions differ over the choice of `psamp`. Rousseeuw and Hubert (1997) only consider p; Marazzi (1993) recommends p+1 and suggests that more samples are better than adjustment for a given computational limit.
The computations are exact for a model with just an intercept and adjustment, and for LQS for a model with an intercept plus one regressor and exhaustive search with adjustment. For all other cases the minimization is only known to be approximate.
### References
P. J. Rousseeuw and A. M. Leroy (1987) *Robust Regression and Outlier Detection.* Wiley.
A. Marazzi (1993) *Algorithms, Routines and S Functions for Robust Statistics.* Wadsworth and Brooks/Cole.
P. Rousseeuw and M. Hubert (1997) Recent developments in PROGRESS. In *L1-Statistical Procedures and Related Topics*, ed Y. Dodge, IMS Lecture Notes volume **31**, pp. 201–214.
### See Also
`<predict.lqs>`
### Examples
```
## IGNORE_RDIFF_BEGIN
set.seed(123) # make reproducible
lqs(stack.loss ~ ., data = stackloss)
lqs(stack.loss ~ ., data = stackloss, method = "S", nsamp = "exact")
## IGNORE_RDIFF_END
```
r None
`addterm` Try All One-Term Additions to a Model
------------------------------------------------
### Description
Try fitting all models that differ from the current model by adding a single term from those supplied, maintaining marginality.
This function is generic; there exist methods for classes `lm` and `glm` and the default method will work for many other classes.
### Usage
```
addterm(object, ...)
## Default S3 method:
addterm(object, scope, scale = 0, test = c("none", "Chisq"),
k = 2, sorted = FALSE, trace = FALSE, ...)
## S3 method for class 'lm'
addterm(object, scope, scale = 0, test = c("none", "Chisq", "F"),
k = 2, sorted = FALSE, ...)
## S3 method for class 'glm'
addterm(object, scope, scale = 0, test = c("none", "Chisq", "F"),
k = 2, sorted = FALSE, trace = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | An object fitted by some model-fitting function. |
| `scope` | a formula specifying a maximal model which should include the current one. All additional terms in the maximal model with all marginal terms in the original model are tried. |
| `scale` | used in the definition of the AIC statistic for selecting the models, currently only for `lm`, `aov` and `glm` models. Specifying `scale` asserts that the residual standard error or dispersion is known. |
| `test` | should the results include a test statistic relative to the original model? The F test is only appropriate for `lm` and `aov` models, and perhaps for some over-dispersed `glm` models. The Chisq test can be an exact test (`lm` models with known scale) or a likelihood-ratio test depending on the method. |
| `k` | the multiple of the number of degrees of freedom used for the penalty. Only `k=2` gives the genuine AIC: `k = log(n)` is sometimes referred to as BIC or SBC. |
| `sorted` | should the results be sorted on the value of AIC? |
| `trace` | if `TRUE` additional information may be given on the fits as they are tried. |
| `...` | arguments passed to or from other methods. |
### Details
The definition of AIC is only up to an additive constant: when appropriate (`lm` models with specified scale) the constant is taken to be that used in Mallows' Cp statistic and the results are labelled accordingly.
### Value
A table of class `"anova"` containing at least columns for the change in degrees of freedom and AIC (or Cp) for the models. Some methods will give further information, for example sums of squares, deviances, log-likelihoods and test statistics.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<dropterm>`, `[stepAIC](stepaic)`
### Examples
```
quine.hi <- aov(log(Days + 2.5) ~ .^4, quine)
quine.lo <- aov(log(Days+2.5) ~ 1, quine)
addterm(quine.lo, quine.hi, test="F")
house.glm0 <- glm(Freq ~ Infl*Type*Cont + Sat, family=poisson,
data=housing)
addterm(house.glm0, ~. + Sat:(Infl+Type+Cont), test="Chisq")
house.glm1 <- update(house.glm0, . ~ . + Sat*(Infl+Type+Cont))
addterm(house.glm1, ~. + Sat:(Infl+Type+Cont)^2, test = "Chisq")
```
r None
`rlm` Robust Fitting of Linear Models
--------------------------------------
### Description
Fit a linear model by robust regression using an M estimator.
### Usage
```
rlm(x, ...)
## S3 method for class 'formula'
rlm(formula, data, weights, ..., subset, na.action,
method = c("M", "MM", "model.frame"),
wt.method = c("inv.var", "case"),
model = TRUE, x.ret = TRUE, y.ret = FALSE, contrasts = NULL)
## Default S3 method:
rlm(x, y, weights, ..., w = rep(1, nrow(x)),
init = "ls", psi = psi.huber,
scale.est = c("MAD", "Huber", "proposal 2"), k2 = 1.345,
method = c("M", "MM"), wt.method = c("inv.var", "case"),
maxit = 20, acc = 1e-4, test.vec = "resid", lqs.control = NULL)
psi.huber(u, k = 1.345, deriv = 0)
psi.hampel(u, a = 2, b = 4, c = 8, deriv = 0)
psi.bisquare(u, c = 4.685, deriv = 0)
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula of the form `y ~ x1 + x2 + ...`. |
| `data` | an optional data frame, list or environment from which variables specified in `formula` are preferentially to be taken. |
| `weights` | a vector of prior weights for each case. |
| `subset` | An index vector specifying the cases to be used in fitting. |
| `na.action` | A function to specify the action to be taken if `NA`s are found. The ‘factory-fresh’ default action in **R** is `[na.omit](../../stats/html/na.fail)`, and can be changed by `[options](../../base/html/options)(na.action=)`. |
| `x` | a matrix or data frame containing the explanatory variables. |
| `y` | the response: a vector of length the number of rows of `x`. |
| `method` | currently either M-estimation or MM-estimation or (for the `formula` method only) find the model frame. MM-estimation is M-estimation with Tukey's biweight initialized by a specific S-estimator. See the ‘Details’ section. |
| `wt.method` | are the weights case weights (giving the relative importance of each case, so a weight of 2 means there are two of these) or the inverse of the variances, so a weight of two means this error is half as variable? |
| `model` | should the model frame be returned in the object? |
| `x.ret` | should the model matrix be returned in the object? |
| `y.ret` | should the response be returned in the object? |
| `contrasts` | optional contrast specifications: see `[lm](../../stats/html/lm)`. |
| `w` | (optional) initial down-weighting for each case. |
| `init` | (optional) initial values for the coefficients OR a method to find initial values OR the result of a fit with a `coef` component. Known methods are `"ls"` (the default) for an initial least-squares fit using weights `w*weights`, and `"lts"` for an unweighted least-trimmed squares fit with 200 samples. |
| `psi` | the psi function is specified by this argument. It must give (possibly by name) a function `g(x, ..., deriv)` that for `deriv=0` returns psi(x)/x and for `deriv=1` returns psi'(x). Tuning constants will be passed in via `...`. |
| `scale.est` | method of scale estimation: re-scaled MAD of the residuals (default) or Huber's proposal 2 (which can be selected by either `"Huber"` or `"proposal 2"`). |
| `k2` | tuning constant used for Huber proposal 2 scale estimation. |
| `maxit` | the limit on the number of IWLS iterations. |
| `acc` | the accuracy for the stopping criterion. |
| `test.vec` | the stopping criterion is based on changes in this vector. |
| `...` | additional arguments to be passed to `rlm.default` or to the `psi` function. |
| `lqs.control` | An optional list of control values for `<lqs>`. |
| `u` | numeric vector of evaluation points. |
| `k, a, b, c` | tuning constants. |
| `deriv` | `0` or `1`: compute values of the psi function or of its first derivative. |
### Details
Fitting is done by iterated re-weighted least squares (IWLS).
Psi functions are supplied for the Huber, Hampel and Tukey bisquare proposals as `psi.huber`, `psi.hampel` and `psi.bisquare`. Huber's corresponds to a convex optimization problem and gives a unique solution (up to collinearity). The other two will have multiple local minima, and a good starting point is desirable.
Selecting `method = "MM"` selects a specific set of options which ensures that the estimator has a high breakdown point. The initial set of coefficients and the final scale are selected by an S-estimator with `k0 = 1.548`; this gives (for *n >> p*) breakdown point 0.5. The final estimator is an M-estimator with Tukey's biweight and fixed scale that will inherit this breakdown point provided `c > k0`; this is true for the default value of `c` that corresponds to 95% relative efficiency at the normal. Case weights are not supported for `method = "MM"`.
### Value
An object of class `"rlm"` inheriting from `"lm"`. Note that the `df.residual` component is deliberately set to `NA` to avoid inappropriate estimation of the residual scale from the residual mean square by `"lm"` methods.
The additional components not in an `lm` object are
| | |
| --- | --- |
| `s` | the robust scale estimate used |
| `w` | the weights used in the IWLS process |
| `psi` | the psi function with parameters substituted |
| `conv` | the convergence criteria at each iteration |
| `converged` | did the IWLS converge? |
| `wresid` | a working residual, weighted for `"inv.var"` weights only. |
### Note
Prior to version `7.3-52`, offset terms in `formula` were omitted from fitted and predicted values.
### References
P. J. Huber (1981) *Robust Statistics*. Wiley.
F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw and W. A. Stahel (1986) *Robust Statistics: The Approach based on Influence Functions*. Wiley.
A. Marazzi (1993) *Algorithms, Routines and S Functions for Robust Statistics*. Wadsworth & Brooks/Cole.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[lm](../../stats/html/lm)`, `<lqs>`.
### Examples
```
summary(rlm(stack.loss ~ ., stackloss))
rlm(stack.loss ~ ., stackloss, psi = psi.hampel, init = "lts")
rlm(stack.loss ~ ., stackloss, psi = psi.bisquare)
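## Illustrative addition (not in the original example set): MM-estimation,
## the high-breakdown option described in the 'Details' section.
rlm(stack.loss ~ ., stackloss, method = "MM")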
```
r None
`genotype` Rat Genotype Data
-----------------------------
### Description
Data from a foster feeding experiment with rat mothers and litters of four different genotypes: `A`, `B`, `I` and `J`. Rat litters were separated from their natural mothers at birth and given to foster mothers to rear.
### Usage
```
genotype
```
### Format
The data frame has the following components:
`Litter`
genotype of the litter.
`Mother`
genotype of the foster mother.
`Wt`
Average weight gain of the litter, in grams at age 28 days. (The source states that the within-litter variability is negligible.)
### Source
Scheffe, H. (1959) *The Analysis of Variance* Wiley p. 140.
Bailey, D. W. (1953) *The Inheritance of Maternal Influences on the Growth of the Rat.* Unpublished Ph.D. thesis, University of California. Table B of the Appendix.
### References
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with S-PLUS.* Third Edition. Springer.
r None
`fitdistr` Maximum-likelihood Fitting of Univariate Distributions
------------------------------------------------------------------
### Description
Maximum-likelihood fitting of univariate distributions, allowing parameters to be held fixed if desired.
### Usage
```
fitdistr(x, densfun, start, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | A numeric vector of length at least one containing only [finite](../../base/html/is.finite) values. |
| `densfun` | Either a character string or a function returning a density evaluated at its first argument. Distributions `"beta"`, `"cauchy"`, `"chi-squared"`, `"exponential"`, `"gamma"`, `"geometric"`, `"log-normal"`, `"lognormal"`, `"logistic"`, `"negative binomial"`, `"normal"`, `"Poisson"`, `"t"` and `"weibull"` are recognised, case being ignored. |
| `start` | A named list giving the parameters to be optimized with initial values. This can be omitted for some of the named distributions and must be for others (see Details). |
| `...` | Additional parameters, either for `densfun` or for `optim`. In particular, it can be used to specify bounds via `lower` or `upper` or both. If arguments of `densfun` (or the density function corresponding to a character-string specification) are included they will be held fixed. |
### Details
For the Normal, log-Normal, geometric, exponential and Poisson distributions the closed-form MLEs (and exact standard errors) are used, and `start` should not be supplied.
For all other distributions, direct optimization of the log-likelihood is performed using `[optim](../../stats/html/optim)`. The estimated standard errors are taken from the observed information matrix, calculated by a numerical approximation. For one-dimensional problems the Nelder-Mead method is used and for multi-dimensional problems the BFGS method, unless arguments named `lower` or `upper` are supplied (when `L-BFGS-B` is used) or `method` is supplied explicitly.
For the `"t"` named distribution the density is taken to be the location-scale family with location `m` and scale `s`.
For the following named distributions, reasonable starting values will be computed if `start` is omitted or only partially specified: `"cauchy"`, `"gamma"`, `"logistic"`, `"negative binomial"` (parametrized by `mu` and `size`), `"t"` and `"weibull"`. Note that these starting values may not be good enough if the fit is poor: in particular they are not resistant to outliers unless the fitted distribution is long-tailed.
There are `[print](../../base/html/print)`, `[coef](../../stats/html/coef)`, `[vcov](../../stats/html/vcov)` and `[logLik](../../stats/html/loglik)` methods for class `"fitdistr"`.
### Value
An object of class `"fitdistr"`, a list with four components,
| | |
| --- | --- |
| `estimate` | the parameter estimates, |
| `sd` | the estimated standard errors, |
| `vcov` | the estimated variance-covariance matrix, and |
| `loglik` | the log-likelihood. |
### Note
Numerical optimization cannot work miracles: please note the comments in `[optim](../../stats/html/optim)` on scaling data. If the fitted parameters are far away from one, consider re-fitting specifying the control parameter `parscale`.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### Examples
```
## avoid spurious accuracy
op <- options(digits = 3)
set.seed(123)
x <- rgamma(100, shape = 5, rate = 0.1)
fitdistr(x, "gamma")
## now do this directly with more control.
fitdistr(x, dgamma, list(shape = 1, rate = 0.1), lower = 0.001)
set.seed(123)
x2 <- rt(250, df = 9)
fitdistr(x2, "t", df = 9)
## allow df to vary: not a very good idea!
fitdistr(x2, "t")
## now do fixed-df fit directly with more control.
mydt <- function(x, m, s, df) dt((x-m)/s, df)/s
fitdistr(x2, mydt, list(m = 0, s = 1), df = 9, lower = c(-Inf, 0))
set.seed(123)
x3 <- rweibull(100, shape = 4, scale = 100)
fitdistr(x3, "weibull")
set.seed(123)
x4 <- rnegbin(500, mu = 5, theta = 4)
fitdistr(x4, "Negative Binomial")
options(op)
```
r None
`shuttle` Space Shuttle Autolander Problem
-------------------------------------------
### Description
The `shuttle` data frame has 256 rows and 7 columns. The first six columns are categorical variables giving example conditions; the seventh is the decision. The first 253 rows are the training set, the last 3 the test conditions.
### Usage
```
shuttle
```
### Format
This data frame contains the following factor columns:
`stability`
stable positioning or not (`stab` / `xstab`).
`error`
size of error (`MM` / `SS` / `LX` / `XL`).
`sign`
sign of error, positive or negative (`pp` / `nn`).
`wind`
wind sign (`head` / `tail`).
`magn`
wind strength (`Light` / `Medium` / `Strong` / `Out of Range`).
`vis`
visibility (`yes` / `no`).
`use`
use the autolander or not. (`auto` / `noauto`.)
### Source
D. Michie (1989) Problems of computer-aided concept formation. In *Applications of Expert Systems 2*, ed. J. R. Quinlan, Turing Institute Press / Addison-Wesley, pp. 310–333.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
r None
`hist.scott` Plot a Histogram with Automatic Bin Width Selection
-----------------------------------------------------------------
### Description
Plot a histogram with automatic bin width selection, using the Scott or Freedman–Diaconis formulae.
### Usage
```
hist.scott(x, prob = TRUE, xlab = deparse(substitute(x)), ...)
hist.FD(x, prob = TRUE, xlab = deparse(substitute(x)), ...)
```
### Arguments
| | |
| --- | --- |
| `x` | A data vector |
| `prob` | Should the plot have unit area, so be a density estimate? |
| `xlab, ...` | Further arguments to `hist`. |
### Value
For the `nclass.*` functions, the suggested number of classes.
### Side Effects
Plot a histogram.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Springer.
### See Also
`[hist](../../graphics/html/hist)`
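### Examples
A minimal illustration (added as a sketch; it uses the `geyser` data set from this package, as in the `bcv` example earlier):
```
hist.scott(geyser$duration)
hist.FD(geyser$duration)
```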
r None
`mona` MONothetic Analysis Clustering of Binary Variables
----------------------------------------------------------
### Description
Returns a list representing a divisive hierarchical clustering of a dataset with binary variables only.
### Usage
```
mona(x, trace.lev = 0)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix or data frame in which each row corresponds to an observation, and each column corresponds to a variable. All variables must be binary. A limited number of missing values (`NA`s) is allowed. Every observation must have at least one value different from `[NA](../../base/html/na)`. No variable should have half of its values missing. There must be at least one variable which has no missing values. A variable with all its non-missing values identical is not allowed. |
| `trace.lev` | logical or integer indicating if (and how much) the algorithm should produce progress output. |
### Details
`mona` is fully described in chapter 7 of Kaufman and Rousseeuw (1990). It is “monothetic” in the sense that each division is based on a single (well-chosen) variable, whereas most other hierarchical methods (including `agnes` and `diana`) are “polythetic”, i.e. they use all variables together.
The `mona`-algorithm constructs a hierarchy of clusterings, starting with one large cluster. Clusters are divided until all observations in the same cluster have identical values for all variables.
At each stage, all clusters are divided according to the values of one variable. A cluster is divided into one cluster with all observations having value 1 for that variable, and another cluster with all observations having value 0 for that variable.
The variable used for splitting a cluster is the variable with the maximal total association to the other variables, according to the observations in the cluster to be split. The association between variables f and g is given by a(f,g)\*d(f,g) - b(f,g)\*c(f,g), where a(f,g), b(f,g), c(f,g), and d(f,g) are the numbers in the contingency table of f and g. [That is, a(f,g) (resp. d(f,g)) is the number of observations for which f and g both have value 0 (resp. value 1); b(f,g) (resp. c(f,g)) is the number of observations for which f has value 0 (resp. 1) and g has value 1 (resp. 0).] The total association of a variable f is the sum of its associations to all variables.
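The association measure above can be written down directly; the tiny helper below is a purely illustrative sketch (it is not part of the package interface):

```
assoc <- function(f, g) {
  ## 2 x 2 contingency table of two binary (0/1) variables
  tab <- table(factor(f, levels = 0:1), factor(g, levels = 0:1))
  ## a*d - b*c in the notation used above
  tab["0", "0"] * tab["1", "1"] - tab["0", "1"] * tab["1", "0"]
}
assoc(c(0, 0, 1, 1, 0), c(0, 1, 1, 1, 0))  # 4
```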
### Value
an object of class `"mona"` representing the clustering. See `<mona.object>` for details.
### Missing Values (`[NA](../../base/html/na)`s)
The mona-algorithm requires “pure” 0-1 values. However, `mona(x)` allows `x` to contain (not too many) `[NA](../../base/html/na)`s. In a preliminary step, these are “imputed”, i.e., all missing values are filled in. To do this, the same measure of association between variables is used as in the algorithm. When variable f has missing values, the variable g with the largest absolute association to f is looked up. When the association between f and g is positive, any missing value of f is replaced by the value of g for the same observation. If the association between f and g is negative, then any missing value of f is replaced by the value of 1-g for the same observation.
### Note
In cluster versions before 2.0.6, the algorithm entered an infinite loop in the boundary case of one variable, i.e., `ncol(x) == 1`, which currently signals an error (because the algorithm, now in C, has not correctly taken account of this special case).
### See Also
`<agnes>` for background and references; `<mona.object>`, `<plot.mona>`.
### Examples
```
data(animals)
ma <- mona(animals)
ma
## Plot similar to Figure 10 in Struyf et al (1996)
plot(ma)
## One place to see if/how error messages are *translated* (to 'de' / 'pl'):
ani.NA <- animals; ani.NA[4,] <- NA
aniNA <- within(animals, { end[2:9] <- NA })
aniN2 <- animals; aniN2[cbind(1:6, c(3, 1, 4:6, 2))] <- NA
ani.non2 <- within(animals, end[7] <- 3 )
ani.idNA <- within(animals, end[!is.na(end)] <- 1 )
try( mona(ani.NA) ) ## error: .. object with all values missing
try( mona(aniNA) ) ## error: .. more than half missing values
try( mona(aniN2) ) ## error: all have at least one missing
try( mona(ani.non2) ) ## error: all must be binary
try( mona(ani.idNA) ) ## error: ditto
```
r None
`partition.object` Partitioning Object
---------------------------------------
### Description
The objects of class `"partition"` represent a partitioning of a dataset into clusters.
### Value
a `"partition"` object is a list with the following (and typically more) components:
| | |
| --- | --- |
| `clustering` | the clustering vector. An integer vector of length *n*, the number of observations, giving for each observation the number ('id') of the cluster to which it belongs. |
| `call` | the matched `[call](../../base/html/call)` generating the object. |
| `silinfo` | a list with all *silhouette* information, only available when the number of clusters is non-trivial, i.e., *1 < k < n*. It then has the following components (see `<silhouette>`): `widths`, an (n x 3) matrix, as returned by `<silhouette>()`, giving for each observation i the cluster to which i belongs, the neighbor cluster of i (the cluster, not containing i, for which the average dissimilarity between its observations and i is minimal), and the silhouette width *s(i)* of the observation; `clus.avg.widths`, the average silhouette width per cluster; and `avg.width`, the average silhouette width for the dataset, i.e., simply the average of *s(i)* over all observations *i*. This information is also needed to construct a *silhouette plot* of the clustering, see `<plot.partition>`. Note that `avg.width` can be maximized over different clusterings (e.g. with varying number of clusters) to choose an *optimal* clustering. |
| `objective` | value of criterion maximized during the partitioning algorithm; may have more than one entry, for different stages. |
| `diss` | an object of class `"dissimilarity"`, representing the total dissimilarity matrix of the dataset (or relevant subset, e.g. for `clara`). |
| `data` | a matrix containing the original or standardized data. This might be missing to save memory or when a dissimilarity matrix was given as input structure to the clustering method. |
### GENERATION
These objects are returned from `pam`, `clara` or `fanny`.
### METHODS
The `"partition"` class has a method for the following generic functions: `plot`, `clusplot`.
### INHERITANCE
The following classes inherit from class `"partition"` : `"pam"`, `"clara"` and `"fanny"`.
See `<pam.object>`, `<clara.object>` and `<fanny.object>` for details.
### See Also
`<pam>`, `<clara>`, `<fanny>`.
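### Examples
A small illustrative sketch (added here; it uses the `ruspini` data set shipped with this package and a `pam` fit, one of the generators listed above):
```
data(ruspini)
pr <- pam(ruspini, 4)      # a "pam" object, inheriting from "partition"
head(pr$clustering)        # the clustering vector
pr$silinfo$avg.width       # average silhouette width described above
```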
r None
`print.dissimilarity` Print and Summary Methods for Dissimilarity Objects
--------------------------------------------------------------------------
### Description
Print or summarize the distances and the attributes of a `dissimilarity` object.
These are methods for the functions `print()` and `summary()` for `dissimilarity` objects. See `print`, `print.default`, or `summary` for the general behavior of these.
### Usage
```
## S3 method for class 'dissimilarity'
print(x, diag = NULL, upper = NULL,
digits = getOption("digits"), justify = "none", right = TRUE, ...)
## S3 method for class 'dissimilarity'
summary(object,
digits = max(3, getOption("digits") - 2), ...)
## S3 method for class 'summary.dissimilarity'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `dissimilarity` object or a `summary.dissimilarity` one for `print.summary.dissimilarity()`. |
| `digits` | the number of digits to use, see `[print.default](../../base/html/print.default)`. |
| `diag, upper, justify, right` | optional arguments specifying how the triangular dissimilarity matrix is printed; see `[print.dist](../../stats/html/dist)`. |
| `...` | potential further arguments (required by generic). |
### See Also
`<daisy>`, `<dissimilarity.object>`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`, `[print.dist](../../stats/html/dist)`.
### Examples
```
## See example(daisy)
sd <- summary(daisy(matrix(rnorm(100), 20,5)))
sd # -> print.summary.dissimilarity(.)
str(sd)
```
r None
`clara.object` Clustering Large Applications (CLARA) Object
------------------------------------------------------------
### Description
The objects of class `"clara"` represent a partitioning of a large dataset into clusters and are typically returned from `<clara>`.
### Value
A legitimate `clara` object is a list with the following components:
| | |
| --- | --- |
| `sample` | labels or case numbers of the observations in the best sample, that is, the sample used by the `clara` algorithm for the final partition. |
| `medoids` | the medoids or representative objects of the clusters. It is a matrix with in each row the coordinates of one medoid. Possibly `NULL`, namely when the object resulted from `clara(*, medoids.x=FALSE)`. Use the following `i.med` in that case. |
| `i.med` | the *indices* of the `medoids` above: `medoids <- x[i.med,]` where `x` is the original data matrix in `clara(x,*)`. |
| `clustering` | the clustering vector, see `<partition.object>`. |
| `objective` | the objective function for the final clustering of the entire dataset. |
| `clusinfo` | matrix, each row gives numerical information for one cluster. These are the cardinality of the cluster (number of observations), the maximal and average dissimilarity between the observations in the cluster and the cluster's medoid. The last column is the maximal dissimilarity between the observations in the cluster and the cluster's medoid, divided by the minimal dissimilarity between the cluster's medoid and the medoid of any other cluster. If this ratio is small, the cluster is well-separated from the other clusters. |
| `diss` | dissimilarity (maybe NULL), see `<partition.object>`. |
| `silinfo` | list with silhouette width information for the best sample, see `<partition.object>`. |
| `call` | generating call, see `<partition.object>`. |
| `data` | matrix, possibly standardized, or NULL, see `<partition.object>`. |
### Methods, Inheritance
The `"clara"` class has methods for the following generic functions: `print`, `summary`.
The class `"clara"` inherits from `"partition"`. Therefore, the generic functions `plot` and `clusplot` can be used on a `clara` object.
### See Also
`<clara>`, `<dissimilarity.object>`, `<partition.object>`, `<plot.partition>`.
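### Examples
An illustrative sketch (added here; it uses the `xclara` data set shipped with this package):
```
data(xclara)
cx <- clara(xclara, 3)
cx$i.med            # indices of the medoids in the original data
xclara[cx$i.med, ]  # the corresponding rows, i.e. the medoids themselves
```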
r None
`print.clara` Print Method for CLARA Objects
---------------------------------------------
### Description
Prints the best sample, medoids, clustering vector and objective function of `clara` object.
This is a method for the function `[print](../../base/html/print)()` for objects inheriting from class `<clara>`.
### Usage
```
## S3 method for class 'clara'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a clara object. |
| `...` | potential further arguments (required by generic). |
### See Also
`<summary.clara>` producing more output; `<clara>`, `<clara.object>`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`.
r None
`agnes.object` Agglomerative Nesting (AGNES) Object
----------------------------------------------------
### Description
The objects of class `"agnes"` represent an agglomerative hierarchical clustering of a dataset.
### Value
A legitimate `agnes` object is a list with the following components:
| | |
| --- | --- |
| `order` | a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross. |
| `order.lab` | a vector similar to `order`, but containing observation labels instead of observation numbers. This component is only available if the original observations were labelled. |
| `height` | a vector with the distances between merging clusters at the successive stages. |
| `ac` | the agglomerative coefficient, measuring the clustering structure of the dataset. For each observation i, denote by m(i) its dissimilarity to the first cluster it is merged with, divided by the dissimilarity of the merger in the final step of the algorithm. The `ac` is the average of all 1 - m(i). It can also be seen as the average width (or the percentage filled) of the banner plot. Because `ac` grows with the number of observations, this measure should not be used to compare datasets of very different sizes. |
| `merge` | an (n-1) by 2 matrix, where n is the number of observations. Row i of `merge` describes the merging of clusters at step i of the clustering. If a number j in the row is negative, then the single observation |j| is merged at this stage. If j is positive, then the merger is with the cluster formed at stage j of the algorithm. |
| `diss` | an object of class `"dissimilarity"` (see `<dissimilarity.object>`), representing the total dissimilarity matrix of the dataset. |
| `data` | a matrix containing the original or standardized measurements, depending on the `stand` option of the function `agnes`. If a dissimilarity matrix was given as input structure, then this component is not available. |
### GENERATION
This class of objects is returned from `<agnes>`.
### METHODS
The `"agnes"` class has methods for the following generic functions: `print`, `summary`, `plot`, and `[as.dendrogram](../../stats/html/dendrogram)`.
In addition, `[cutree](../../stats/html/cutree)(x, *)` can be used to “cut” the dendrogram in order to produce cluster assignments.
### INHERITANCE
The class `"agnes"` inherits from `"twins"`. Therefore, the generic functions `<pltree>` and `[as.hclust](../../stats/html/as.hclust)` are available for `agnes` objects. After applying `as.hclust()`, all *its* methods are available, of course.
### See Also
`<agnes>`, `<diana>`, `[as.hclust](../../stats/html/as.hclust)`, `[hclust](../../stats/html/hclust)`, `<plot.agnes>`, `<twins.object>`.
`[cutree](../../stats/html/cutree)`.
### Examples
```
data(agriculture)
ag.ag <- agnes(agriculture)
class(ag.ag)
pltree(ag.ag) # the dendrogram
## cut the dendrogram -> get cluster assignments:
(ck3 <- cutree(ag.ag, k = 3))
(ch6 <- cutree(as.hclust(ag.ag), h = 6))
stopifnot(identical(unname(ch6), ck3))
```
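As a further sketch (not part of the original example above), the `ac` component can be inspected directly and related to the banner plot it summarizes:

```
library(cluster)
data(agriculture)
ag <- agnes(agriculture)
ag$ac                       # agglomerative coefficient
plot(ag, which.plots = 1)   # banner; 'ac' is its average width
```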
r None
`clusplot.default` Bivariate Cluster Plot (clusplot) Default Method
--------------------------------------------------------------------
### Description
Creates a bivariate plot visualizing a partition (clustering) of the data. All observations are represented by points in the plot, using principal components or multidimensional scaling. Around each cluster an ellipse is drawn.
### Usage
```
## Default S3 method:
clusplot(x, clus, diss = FALSE,
s.x.2d = mkCheckX(x, diss), stand = FALSE,
lines = 2, shade = FALSE, color = FALSE,
labels= 0, plotchar = TRUE,
col.p = "dark green", col.txt = col.p,
col.clus = if(color) c(2, 4, 6, 3) else 5, cex = 1, cex.txt = cex,
span = TRUE,
add = FALSE,
xlim = NULL, ylim = NULL,
main = paste("CLUSPLOT(", deparse(substitute(x)),")"),
sub = paste("These two components explain",
round(100 * var.dec, digits = 2), "% of the point variability."),
xlab = "Component 1", ylab = "Component 2",
verbose = getOption("verbose"),
...)
```
### Arguments
| | |
| --- | --- |
| `x` | matrix or data frame, or dissimilarity matrix, depending on the value of the `diss` argument. In case of a matrix (alike), each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (`[NA](../../base/html/na)`s) are allowed. They are replaced by the median of the corresponding variable. When some variables or some observations contain only missing values, the function stops with a warning message. In case of a dissimilarity matrix, `x` is the output of `<daisy>` or `[dist](../../stats/html/dist)` or a symmetric matrix. Also, a vector of length *n\*(n-1)/2* is allowed (where *n* is the number of observations), and will be interpreted in the same way as the output of the above-mentioned functions. Missing values (NAs) are not allowed. |
| `clus` | a vector of length n representing a clustering of `x`. For each observation the vector lists the number or name of the cluster to which it has been assigned. `clus` is often the clustering component of the output of `<pam>`, `<fanny>` or `<clara>`. |
| `diss` | logical indicating if `x` will be considered as a dissimilarity matrix or a matrix of observations by variables (see the `x` argument above). |
| `s.x.2d` | a `[list](../../base/html/list)` with components named `x` (a *n x 2* matrix; typically something like principal components of original data), `labs` and `var.dec`. |
| `stand` | logical flag: if true, then the representations of the n observations in the 2-dimensional plot are standardized. |
| `lines` | integer out of `0, 1, 2`, used to obtain an idea of the distances between ellipses. The distance between two ellipses E1 and E2 is measured along the line connecting the centers *m1* and *m2* of the two ellipses. In case E1 and E2 overlap on the line through *m1* and *m2*, no line is drawn. Otherwise, the result depends on the value of `lines`: if `lines = 0`, no distance lines will appear on the plot; if `lines = 1`, the line segment between *m1* and *m2* is drawn; if `lines = 2`, a line segment between the boundaries of E1 and E2 is drawn (along the line connecting *m1* and *m2*). |
| `shade` | logical flag: if TRUE, then the ellipses are shaded in relation to their density. The density is the number of points in the cluster divided by the area of the ellipse. |
| `color` | logical flag: if TRUE, then the ellipses are colored with respect to their density. With increasing density, the colors are light blue, light green, red and purple. To see these colors on the graphics device, an appropriate color scheme should be selected (we recommend a white background). |
| `labels` | integer code, currently one of 0, 1, 2, 3, 4 and 5. If `labels = 0`, no labels are placed in the plot; `labels = 1`, points and ellipses can be identified in the plot (see `[identify](../../graphics/html/identify)`); `labels = 2`, all points and ellipses are labelled in the plot; `labels = 3`, only the points are labelled in the plot; `labels = 4`, only the ellipses are labelled in the plot; `labels = 5`, the ellipses are labelled in the plot, and points can be identified. The levels of the vector `clus` are taken as labels for the clusters. The labels of the points are the rownames of `x` if `x` is matrix like. Otherwise (`diss = TRUE`), `x` is a vector, and point labels can be attached to `x` as a "Labels" attribute (`attr(x,"Labels")`), as is done for the output of `<daisy>`. A possible `[names](../../base/html/names)` attribute of `clus` will not be taken into account. |
| `plotchar` | logical flag: if TRUE, then the plotting symbols differ for points belonging to different clusters. |
| `span` | logical flag: if TRUE, then each cluster is represented by the ellipse with smallest area containing all its points. (This is a special case of the minimum volume ellipsoid.) If FALSE, the ellipse is based on the mean and covariance matrix of the same points. While this is faster to compute, it often yields a much larger ellipse. There are also some special cases: When a cluster consists of only one point, a tiny circle is drawn around it. When the points of a cluster fall on a straight line, `span=FALSE` draws a narrow ellipse around it and `span=TRUE` gives the exact line segment. |
| `add` | logical indicating if ellipses (and labels if `labels` is true) should be *added* to an already existing plot. If false, neither a `[title](../../graphics/html/title)` nor a sub title (see `sub`) is written. |
| `col.p` | color code(s) used for the observation points. |
| `col.txt` | color code(s) used for the labels (if `labels >= 2`). |
| `col.clus` | color code for the ellipses (and their labels); only one if color is false (as per default). |
| `cex, cex.txt` | character **ex**pansion (size), for the point symbols and point labels, respectively. |
| `xlim, ylim` | numeric vectors of length 2, giving the x- and y- ranges as in `[plot.default](../../graphics/html/plot.default)`. |
| `main` | main title for the plot; by default, one is constructed. |
| `sub` | sub title for the plot; by default, one is constructed. |
| `xlab, ylab` | x- and y- axis labels for the plot, with defaults. |
| `verbose` | a logical indicating, if there should be extra diagnostic output; mainly for ‘debugging’. |
| `...` | Further graphical parameters may also be supplied, see `[par](../../graphics/html/par)`. |
### Details
`clusplot` uses function calls `[princomp](../../stats/html/princomp)(*, cor = (ncol(x) > 2))` or `[cmdscale](../../stats/html/cmdscale)(*, add=TRUE)`, respectively, depending on `diss` being false or true. These functions are data reduction techniques to represent the data in a bivariate plot.
Ellipses are then drawn to indicate the clusters. The further layout of the plot is determined by the optional arguments.
### Value
An invisible list with components:
| | |
| --- | --- |
| `Distances` | When `lines` is 1 or 2 we obtain a k by k matrix (k is the number of clusters). The element in `[i,j]` is the distance between ellipse i and ellipse j. If `lines = 0`, then the value of this component is `NA`. |
| `Shading` | A vector of length k (where k is the number of clusters), containing the amount of shading per cluster. Let y be a vector where element i is the ratio between the number of points in cluster i and the area of ellipse i. When the cluster i is a line segment, y[i] and the density of the cluster are set to `NA`. Let z be the sum of all the elements of y without the NAs. Then we put shading = y/z \*37 + 3 . |
### Side Effects
a visual display of the clustering is plotted on the current graphics device.
### Note
When we have 4 or fewer clusters, then `color = TRUE` gives every cluster a different color. When there are more than 4 clusters, clusplot uses the function `<pam>` to cluster the densities into 4 groups such that ellipses with nearly the same density get the same color. `col.clus` specifies the colors used.
The `col.p` and `col.txt` arguments, added for **R**, are recycled to have length the number of observations. If `col.p` has more than one value, using `color = TRUE` can be confusing because of a mix of point and ellipse colors.
### References
Pison, G., Struyf, A. and Rousseeuw, P.J. (1999) Displaying a Clustering with CLUSPLOT, *Computational Statistics and Data Analysis*, **30**, 381–392.
Kaufman, L. and Rousseeuw, P.J. (1990). *Finding Groups in Data: An Introduction to Cluster Analysis.* Wiley, New York.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997). Integrating Robust Clustering Techniques in S-PLUS, *Computational Statistics and Data Analysis*, **26**, 17-37.
### See Also
`[princomp](../../stats/html/princomp)`, `[cmdscale](../../stats/html/cmdscale)`, `<pam>`, `<clara>`, `<daisy>`, `[par](../../graphics/html/par)`, `[identify](../../graphics/html/identify)`, `[cov.mve](../../mass/html/cov.rob)`, `<clusplot.partition>`.
### Examples
```
## plotting votes.diss(dissimilarity) in a bivariate plot and
## partitioning into 2 clusters
data(votes.repub)
votes.diss <- daisy(votes.repub)
pamv <- pam(votes.diss, 2, diss = TRUE)
clusplot(pamv, shade = TRUE)
## is the same as
votes.clus <- pamv$clustering
clusplot(votes.diss, votes.clus, diss = TRUE, shade = TRUE)
## Now look at components 3 and 2 instead of 1 and 2:
str(cMDS <- cmdscale(votes.diss, k=3, add=TRUE))
clusplot(pamv, s.x.2d = list(x=cMDS$points[, c(3,2)],
labs=rownames(votes.repub), var.dec=NA),
shade = TRUE, col.p = votes.clus,
sub="", xlab = "Component 3", ylab = "Component 2")
clusplot(pamv, col.p = votes.clus, labels = 4)# color points and label ellipses
# "simple" cheap ellipses: larger than minimum volume:
# here they are *added* to the previous plot:
clusplot(pamv, span = FALSE, add = TRUE, col.clus = "midnightblue")
## Setting a small *label* size:
clusplot(votes.diss, votes.clus, diss = TRUE, labels = 3, cex.txt = 0.6)
if(dev.interactive()) { # uses identify() *interactively* :
clusplot(votes.diss, votes.clus, diss = TRUE, shade = TRUE, labels = 1)
clusplot(votes.diss, votes.clus, diss = TRUE, labels = 5)# ident. only points
}
## plotting iris (data frame) in a 2-dimensional plot and partitioning
## into 3 clusters.
data(iris)
iris.x <- iris[, 1:4]
cl3 <- pam(iris.x, 3)$clustering
op <- par(mfrow= c(2,2))
clusplot(iris.x, cl3, color = TRUE)
U <- par("usr")
## zoom in :
rect(0,-1, 2,1, border = "orange", lwd=2)
clusplot(iris.x, cl3, color = TRUE, xlim = c(0,2), ylim = c(-1,1))
box(col="orange",lwd=2); mtext("sub region", font = 4, cex = 2)
## or zoom out :
clusplot(iris.x, cl3, color = TRUE, xlim = c(-4,4), ylim = c(-4,4))
mtext("'super' region", font = 4, cex = 2)
rect(U[1],U[3], U[2],U[4], lwd=2, lty = 3)
# reset graphics
par(op)
```
r None
`plantTraits` Plant Species Traits Data
----------------------------------------
### Description
This dataset constitutes a description of 136 plant species according to biological attributes (morphological or reproductive)
### Usage
```
data(plantTraits)
```
### Format
A data frame with 136 observations on the following 31 variables.
`pdias`
Diaspore mass (mg)
`longindex`
Seed bank longevity
`durflow`
Flowering duration
`height`
Plant height, an ordered factor with levels `1` < `2` < ... < `8`.
`begflow`
Time of first flowering, an ordered factor with levels `1` < `2` < `3` < `4` < `5` < `6` < `7` < `8` < `9`
`mycor`
Mycorrhizas, an ordered factor with levels `0` never < `1` sometimes < `2` always.
`vegaer`
aerial vegetative propagation, an ordered factor with levels `0` never < `1` present but limited < `2` important.
`vegsout`
underground vegetative propagation, an ordered factor with 3 levels identical to `vegaer` above.
`autopoll`
selfing pollination, an ordered factor with levels `0` never < `1` rare < `2` often < `3` the rule.
`insects`
insect pollination, an ordered factor with 5 levels `0` < ... < `4`.
`wind`
wind pollination, an ordered factor with 5 levels `0` < ... < `4`.
`lign`
a binary factor with levels `0:1`, indicating if plant is woody.
`piq`
a binary factor indicating if plant is thorny.
`ros`
a binary factor indicating if plant is rosette.
`semiros`
semi-rosette plant, a binary factor (`0`: no; `1`: yes).
`leafy`
leafy plant, a binary factor.
`suman`
summer annual, a binary factor.
`winan`
winter annual, a binary factor.
`monocarp`
monocarpic perennial, a binary factor.
`polycarp`
polycarpic perennial, a binary factor.
`seasaes`
seasonal aestival leaves, a binary factor.
`seashiv`
seasonal hibernal leaves, a binary factor.
`seasver`
seasonal vernal leaves, a binary factor.
`everalw`
leaves always evergreen, a binary factor.
`everparti`
leaves partially evergreen, a binary factor.
`elaio`
fruits with an elaiosome (dispersed by ants), a binary factor.
`endozoo`
endozoochorous fruits, a binary factor.
`epizoo`
epizoochorous fruits, a binary factor.
`aquat`
aquatic dispersal fruits, a binary factor.
`windgl`
wind dispersed fruits, a binary factor.
`unsp`
unspecialized mechanism of seed dispersal, a binary factor.
### Details
Most of the factor attributes are not disjunctive. For example, a plant is usually pollinated by insects, but self-pollination can sometimes occur.
### Source
Vallet, Jeanne (2005) *Structuration de communautés végétales et analyse comparative de traits biologiques le long d'un gradient d'urbanisation*. Mémoire de Master 2 'Ecologie-Biodiversité-Evolution'; Université Paris Sud XI, 30p.+ annexes (in French)
### Examples
```
data(plantTraits)
## Calculation of a dissimilarity matrix
library(cluster)
dai.b <- daisy(plantTraits,
type = list(ordratio = 4:11, symm = 12:13, asymm = 14:31))
## Hierarchical classification
agn.trts <- agnes(dai.b, method="ward")
plot(agn.trts, which.plots = 2, cex= 0.6)
plot(agn.trts, which.plots = 1)
cutree6 <- cutree(agn.trts, k=6)
cutree6
## Principal Coordinate Analysis
cmdsdai.b <- cmdscale(dai.b, k=6)
plot(cmdsdai.b[, 1:2], asp = 1, col = cutree6)
```
r None
`summary.clara` Summary Method for 'clara' Objects
---------------------------------------------------
### Description
Returns (and prints) a summary list for a `clara` object. Printing gives more output than the corresponding `<print.clara>` method.
### Usage
```
## S3 method for class 'clara'
summary(object, ...)
## S3 method for class 'summary.clara'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `<clara>` object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<clara.object>`
### Examples
```
## generate 2000 objects, divided into 5 clusters.
set.seed(47)
x <- rbind(cbind(rnorm(400, 0,4), rnorm(400, 0,4)),
cbind(rnorm(400,10,8), rnorm(400,40,6)),
cbind(rnorm(400,30,4), rnorm(400, 0,4)),
cbind(rnorm(400,40,4), rnorm(400,20,2)),
cbind(rnorm(400,50,4), rnorm(400,50,4))
)
clx5 <- clara(x, 5)
## Mis'classification' table:
table(rep(1:5, rep(400,5)), clx5$clust) # -> 1 "error"
summary(clx5)
## Graphically:
par(mfrow = c(3,1), mgp = c(1.5, 0.6, 0), mar = par("mar") - c(0,0,2,0))
plot(x, col = rep(2:6, rep(400,5)))
plot(clx5)
```
r None
`predict.ellipsoid` Predict Method for Ellipsoid Objects
---------------------------------------------------------
### Description
Compute points on the ellipsoid boundary, mostly for drawing.
### Usage
```
predict.ellipsoid(object, n.out=201, ...)
## S3 method for class 'ellipsoid'
predict(object, n.out=201, ...)
ellipsoidPoints(A, d2, loc, n.half = 201)
```
### Arguments
| | |
| --- | --- |
| `object` | an object of class `ellipsoid`, typically from `<ellipsoidhull>()`; alternatively any list-like object with proper components, see details below. |
| `n.out, n.half` | half the number of points to create. |
| `A, d2, loc` | arguments of the auxiliary `ellipsoidPoints`, see below. |
| `...` | passed to and from methods. |
### Details
Note that `ellipsoidPoints` is the workhorse function of `predict.ellipsoid`, a standalone function and method for `ellipsoid` objects, see `<ellipsoidhull>`. The class of `object` is not checked; it must solely have valid components `loc` (of length *p*), the *p x p* matrix `cov` (corresponding to `A`) and `d2` for the center, the shape (“covariance”) matrix and the squared average radius (or distance) or `[qchisq](../../stats/html/chisquare)(*, p)` quantile.
Unfortunately, this is only implemented for *p = 2*, currently; contributions for *p >= 3* are *very welcome*.
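As a minimal sketch of this (the numbers below are made up for illustration), a hand-built list with just `loc`, `cov` and `d2` can be passed to `predict()`:

```
library(cluster)
e <- structure(list(loc = c(0, 0),
                    cov = matrix(c(2, 1, 1, 2), 2, 2),
                    d2  = qchisq(0.95, df = 2)),
               class = "ellipsoid")
xy <- predict(e)            # 2 * n.out points on the boundary
plot(xy, type = "l", asp = 1)
```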
### Value
a numeric matrix of dimension `2*n.out` times *p*.
### See Also
`<ellipsoidhull>`, `<volume.ellipsoid>`.
### Examples
```
## see also example(ellipsoidhull)
## Robust vs. L.S. covariance matrix
set.seed(143)
x <- rt(200, df=3)
y <- 3*x + rt(200, df=2)
plot(x,y, main="non-normal data (N=200)")
mtext("with classical and robust cov.matrix ellipsoids")
X <- cbind(x,y)
C.ls <- cov(X) ; m.ls <- colMeans(X)
d2.99 <- qchisq(0.99, df = 2)
lines(ellipsoidPoints(C.ls, d2.99, loc=m.ls), col="green")
if(require(MASS)) {
Cxy <- cov.rob(cbind(x,y))
lines(ellipsoidPoints(Cxy$cov, d2 = d2.99, loc=Cxy$center), col="red")
}# MASS
```
r None
`ellipsoidhull` Compute the Ellipsoid Hull or Spanning Ellipsoid of a Point Set
--------------------------------------------------------------------------------
### Description
Compute the “ellipsoid hull” or “spanning ellipsoid”, i.e. the ellipsoid of minimal volume (‘area’ in 2D) such that all given points lie just inside or on the boundary of the ellipsoid.
### Usage
```
ellipsoidhull(x, tol=0.01, maxit=5000,
ret.wt = FALSE, ret.sqdist = FALSE, ret.pr = FALSE)
## S3 method for class 'ellipsoid'
print(x, digits = max(1, getOption("digits") - 2), ...)
```
### Arguments
| | |
| --- | --- |
| `x` | the *n* *p*-dimensional points as a numeric *n x p* matrix. |
| `tol` | convergence tolerance for Titterington's algorithm. Setting this to much smaller values may drastically increase the number of iterations needed, and you may want to increase `maxit` as well. |
| `maxit` | integer giving the maximal number of iteration steps for the algorithm. |
| `ret.wt, ret.sqdist, ret.pr` | logicals indicating if additional information should be returned, `ret.wt` specifying the *weights*, `ret.sqdist` the ***sq**uared **dist**ances* and `ret.pr` the final **pr**obabilities in the algorithms. |
| `digits,...` | the usual arguments to `[print](../../base/html/print)` methods. |
### Details
The “spanning ellipsoid” algorithm is said to stem from Titterington (1976); see Pison et al. (1999), who use it for `<clusplot.default>`.
The problem can be seen as a special case of the “Min.Vol.” ellipsoid of which a more flexible and general implementation is `[cov.mve](../../mass/html/cov.rob)` in the `MASS` package.
### Value
an object of class `"ellipsoid"`, basically a `[list](../../base/html/list)` with several components, comprising at least
| | |
| --- | --- |
| `cov` | *p x p* *covariance* matrix describing the ellipsoid. |
| `loc` | *p*-dimensional location of the ellipsoid center. |
| `d2` | average squared radius. Further, *d2 = t^2*, where *t* is “the value of a t-statistic on the ellipse boundary” (from `[ellipse](../../ellipse/html/ellipse)` in the ellipse package), and hence, more usefully, `d2 = qchisq(alpha, df = p)`, where `alpha` is the confidence level for p-variate normally distributed data with location and covariance `loc` and `cov` to lie inside the ellipsoid. |
| `wt` | the vector of weights iff `ret.wt` was true. |
| `sqdist` | the vector of squared distances iff `ret.sqdist` was true. |
| `prob` | the vector of algorithm probabilities iff `ret.pr` was true. |
| `it` | number of iterations used. |
| `tol, maxit` | just the input argument, see above. |
| `eps` | the achieved tolerance which is the maximal squared radius minus *p*. |
| `ierr` | error code as from the algorithm; `0` means *ok*. |
| `conv` | logical indicating if the algorithm converged. This is defined as `it < maxit && ierr == 0`. |
### Author(s)
Martin Maechler did the present class implementation; Rousseeuw et al did the underlying original code.
### References
Pison, G., Struyf, A. and Rousseeuw, P.J. (1999) Displaying a Clustering with CLUSPLOT, *Computational Statistics and Data Analysis*, **30**, 381–392.
D.M. Titterington (1976) Algorithms for computing D-optimal design on finite design spaces. In *Proc. of the 1976 Conf. on Information Science and Systems*, 213–216; Johns Hopkins University.
### See Also
`<predict.ellipsoid>` which is also the `[predict](../../stats/html/predict)` method for `ellipsoid` objects. `<volume.ellipsoid>` for an example of ‘manual’ `ellipsoid` object construction;
further `[ellipse](../../ellipse/html/ellipse)` from package ellipse and `[ellipsePoints](../../sfsmisc/html/ellipsepoints)` from package sfsmisc.
`[chull](../../grdevices/html/chull)` for the convex hull, `[clusplot](clusplot.partition)` which makes use of this; `[cov.mve](../../mass/html/cov.rob)`.
### Examples
```
x <- rnorm(100)
xy <- unname(cbind(x, rnorm(100) + 2*x + 10))
exy. <- ellipsoidhull(xy)
exy. # >> calling print.ellipsoid()
plot(xy, main = "ellipsoidhull(<Gauss data>) -- 'spanning points'")
lines(predict(exy.), col="blue")
points(rbind(exy.$loc), col = "red", cex = 3, pch = 13)
exy <- ellipsoidhull(xy, tol = 1e-7, ret.wt = TRUE, ret.sq = TRUE)
str(exy) # had small 'tol', hence many iterations
(ii <- which(zapsmall(exy $ wt) > 1e-6))
## --> only about 4 to 6 "spanning ellipsoid" points
round(exy$wt[ii],3); sum(exy$wt[ii]) # weights summing to 1
points(xy[ii,], pch = 21, cex = 2,
col="blue", bg = adjustcolor("blue",0.25))
```
r None
`fanny.object` Fuzzy Analysis (FANNY) Object
---------------------------------------------
### Description
The objects of class `"fanny"` represent a fuzzy clustering of a dataset.
### Value
A legitimate `fanny` object is a list with the following components:
| | |
| --- | --- |
| `membership` | matrix containing the memberships for each pair consisting of an observation and a cluster. |
| `memb.exp` | the membership exponent used in the fitting criterion. |
| `coeff` | Dunn's partition coefficient *F(k)* of the clustering, where *k* is the number of clusters. *F(k)* is the sum of all *squared* membership coefficients, divided by the number of observations. Its value is between *1/k* and 1. The normalized form of the coefficient is also given. It is defined as *(F(k) - 1/k) / (1 - 1/k)*, and ranges between 0 and 1. A low value of Dunn's coefficient indicates a very fuzzy clustering, whereas a value close to 1 indicates a near-crisp clustering. |
| `clustering` | the clustering vector of the nearest crisp clustering, see `<partition.object>`. |
| `k.crisp` | integer (*<= k*) giving the number of *crisp* clusters; can be less than *k*, in which case it is recommended to decrease `memb.exp`. |
| `objective` | named vector containing the minimal value of the objective function reached by the FANNY algorithm and the relative convergence tolerance `tol` used. |
| `convergence` | named vector with `iterations`, the number of iterations needed and `converged` indicating if the algorithm converged (in `maxit` iterations within convergence tolerance `tol`). |
| `diss` | an object of class `"dissimilarity"`, see `<partition.object>`. |
| `call` | generating call, see `<partition.object>`. |
| `silinfo` | list with silhouette information of the nearest crisp clustering, see `<partition.object>`. |
| `data` | matrix, possibly standardized, or NULL, see `<partition.object>`. |
### GENERATION
These objects are returned from `<fanny>`.
### METHODS
The `"fanny"` class has methods for the following generic functions: `print`, `summary`.
### INHERITANCE
The class `"fanny"` inherits from `"partition"`. Therefore, the generic functions `plot` and `clusplot` can be used on a `fanny` object.
### See Also
`<fanny>`, `<print.fanny>`, `<dissimilarity.object>`, `<partition.object>`, `<plot.partition>`.
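A minimal sketch (using the `ruspini` data shipped with the package) recomputing Dunn's partition coefficient from the `membership` component and comparing it with `coeff`:

```
library(cluster)
f <- fanny(ruspini, k = 4)
u <- f$membership                   # n x k membership matrix
Fk <- sum(u^2) / nrow(u)            # Dunn's partition coefficient
k <- ncol(u)
c(Fk, (Fk - 1/k) / (1 - 1/k))       # raw and normalized form
f$coeff                             # should agree (up to rounding)
```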
r None
`clusplot.partition` Bivariate Cluster Plot (of a Partitioning Object)
-----------------------------------------------------------------------
### Description
Draws a 2-dimensional “clusplot” (clustering plot) on the current graphics device. The generic function has a default and a `partition` method.
### Usage
```
clusplot(x, ...)
## S3 method for class 'partition'
clusplot(x, main = NULL, dist = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object, here, specifically an object of class `"partition"`, e.g. created by one of the functions `<pam>`, `<clara>`, or `<fanny>`. |
| `main` | title for the plot; when `NULL` (by default), a title is constructed, using `x$call`. |
| `dist` | when `x` has neither a `diss` nor a `data` component, e.g., for `<pam>(dist(*), keep.diss=FALSE)`, `dist` must specify the dissimilarity for the clusplot. |
| `...` | optional arguments passed to methods, notably the `<clusplot.default>` method (except for the `diss` one) may also be supplied to this function. Many graphical parameters (see `[par](../../graphics/html/par)`) may also be supplied as arguments here. |
### Details
The `clusplot.partition()` method relies on `<clusplot.default>`.
If the clustering algorithms `pam`, `fanny` and `clara` are applied to a data matrix of observations-by-variables then a clusplot of the resulting clustering can always be drawn. When the data matrix contains missing values and the clustering is performed with `<pam>` or `<fanny>`, the dissimilarity matrix will be given as input to `clusplot`. When the clustering algorithm `<clara>` was applied to a data matrix with NAs then clusplot will replace the missing values as described in `<clusplot.default>`, because a dissimilarity matrix is not available.
### Value
For the `partition` (and `default`) method: An invisible list with components `Distances` and `Shading`, as for `<clusplot.default>`, see there.
### Side Effects
a 2-dimensional clusplot is created on the current graphics device.
### See Also
`<clusplot.default>` for references; `<partition.object>`, `<pam>`, `<pam.object>`, `<clara>`, `<clara.object>`, `<fanny>`, `<fanny.object>`, `[par](../../graphics/html/par)`.
### Examples
```
## For more, see ?clusplot.default
## generate 25 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(10,0,0.5), rnorm(10,0,0.5)),
cbind(rnorm(15,5,0.5), rnorm(15,5,0.5)))
clusplot(pam(x, 2))
## add noise, and try again :
x4 <- cbind(x, rnorm(25), rnorm(25))
clusplot(pam(x4, 2))
```
r None
`summary.pam` Summary Method for PAM Objects
---------------------------------------------
### Description
Summarize a `<pam>` object and return an object of class `summary.pam`. There's a `[print](../../base/html/print)` method for the latter.
### Usage
```
## S3 method for class 'pam'
summary(object, ...)
## S3 method for class 'summary.pam'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `<pam>` object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<pam>`, `<pam.object>`.
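A minimal sketch on toy data:

```
library(cluster)
set.seed(3)
x <- rbind(matrix(rnorm(50, 0, 0.5), ncol = 2),
           matrix(rnorm(50, 4, 0.5), ncol = 2))
sp <- summary(pam(x, 2))    # object of class "summary.pam"
sp                          # -> print.summary.pam(.)
str(sp, max.level = 1)
```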
r None
`clara` Clustering Large Applications
--------------------------------------
### Description
Computes a `"clara"` object, a `[list](../../base/html/list)` representing a clustering of the data into `k` clusters.
### Usage
```
clara(x, k, metric = c("euclidean", "manhattan", "jaccard"),
stand = FALSE, cluster.only = FALSE, samples = 5,
sampsize = min(n, 40 + 2 * k), trace = 0, medoids.x = TRUE,
keep.data = medoids.x, rngR = FALSE, pamLike = FALSE, correct.d = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (NAs) are allowed. |
| `k` | integer, the number of clusters. It is required that *0 < k < n* where *n* is the number of observations (i.e., n = `nrow(x)`). |
| `metric` | character string specifying the metric to be used for calculating dissimilarities between observations. The currently available options are "euclidean", "manhattan", and "jaccard". Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. |
| `stand` | logical, indicating if the measurements in `x` are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. |
| `cluster.only` | logical; if true, only the clustering will be computed and returned, see details. |
| `samples` | integer, say *N*, the number of samples to be drawn from the dataset. The default, `N = 5`, is rather small for historical (and now back compatibility) reasons and we *recommend to set `samples` an order of magnitude larger*. |
| `sampsize` | integer, say *j*, the number of observations in each sample. `sampsize` should be higher than the number of clusters (`k`) and at most the number of observations (*n =* `nrow(x)`). While computational effort is proportional to *j^2*, see note below, it may still be advisable to set *j =* `sampsize` to a *larger* value than the (historical) default. |
| `trace` | integer indicating a *trace level* for diagnostic output during the algorithm. |
| `medoids.x` | logical indicating if the medoids should be returned, identically to some rows of the input data `x`. If `FALSE`, `keep.data` must be false as well, and the medoid indices, i.e., row numbers of the medoids will still be returned (`i.med` component), and the algorithm saves space by needing one copy less of `x`. |
| `keep.data` | logical indicating if the (*scaled* if `stand` is true) data should be kept in the result. Setting this to `FALSE` saves memory (and hence time), but disables `[clusplot](clusplot.partition)()`ing of the result. Use `medoids.x = FALSE` to save even more memory. |
| `rngR` | logical indicating if **R**'s random number generator should be used instead of the primitive clara()-builtin one. If true, this also means that each call to `clara()` returns a different result – though only slightly different in good situations. |
| `pamLike` | logical indicating if the “swap” phase (see `<pam>`, in C code) should use the same algorithm as `<pam>()`. Note that from Kaufman and Rousseeuw's description this *should* have been true always, but as the original Fortran code and the subsequent port to C has always contained a small one-letter change (a typo according to Martin Maechler) with respect to PAM, the default, `pamLike = FALSE` has been chosen to remain back compatible rather than “PAM compatible”. |
| `correct.d` | logical or integer indicating that—only in the case of `NA`s present in `x`—the correct distance computation should be used instead of the wrong formula which has been present in the original Fortran code and been in use up to early 2016. Because the new correct formula is not back compatible, for the time being, a warning is signalled in this case, unless the user explicitly specifies `correct.d`. |
### Details
`clara` is fully described in chapter 3 of Kaufman and Rousseeuw (1990). Compared to other partitioning methods such as `pam`, it can deal with much larger datasets. Internally, this is achieved by considering sub-datasets of fixed size (`sampsize`) such that the time and storage requirements become linear in *n* rather than quadratic.
Each sub-dataset is partitioned into `k` clusters using the same algorithm as in `<pam>`.
Once `k` representative objects have been selected from the sub-dataset, each observation of the entire dataset is assigned to the nearest medoid.
The mean (equivalent to the sum) of the dissimilarities of the observations to their closest medoid is used as a measure of the quality of the clustering. The sub-dataset for which the mean (or sum) is minimal is retained. A further analysis is carried out on the final partition.
Each sub-dataset is forced to contain the medoids obtained from the best sub-dataset until then. Randomly drawn observations are added to this set until `sampsize` has been reached.
When `cluster.only` is true, the result is simply a (possibly named) integer vector specifying the clustering, i.e.,
`clara(x,k, cluster.only=TRUE)` is the same as
`clara(x,k)$clustering` but computed more efficiently.
### Value
If `cluster.only` is false (as by default), an object of class `"clara"` representing the clustering. See `<clara.object>` for details.
If `cluster.only` is true, the result is the "clustering", an integer vector of length *n* with entries from `1:k`.
### Note
By default, the random sampling is implemented with a *very* simple scheme (with period *2^{16} = 65536*) inside the Fortran code, independently of **R**'s random number generation, and as a matter of fact, deterministically. Alternatively, we recommend setting `rngR = TRUE` which uses **R**'s random number generators. Then, `clara()` results are made reproducible typically by using `[set.seed](../../base/html/random)()` before calling `clara`.
The storage requirement of `clara` computation (for small `k`) is about *O(n \* p) + O(j^2)* where *j =* `sampsize` and *(n,p) =* `dim(x)`. The CPU computing time (again assuming small `k`) is about *O(n \* p \* j^2 \* N)*, where *N =* `samples`.
For “small” datasets, the function `<pam>` can be used directly. What can be considered *small*, is really a function of available computing power, both memory (RAM) and speed. Originally (1990), “small” meant less than 100 observations; in 1997, the authors said *“small (say with fewer than 200 observations)”*; as of 2006, you can use `<pam>` with several thousand observations.
### Author(s)
Kaufman and Rousseeuw (see `<agnes>`), originally. Metric `"jaccard"`: Kamil Kozlowski (`@ownedoutcomes.com`) and Kamil Jadeszko. All arguments from `trace` on, and most **R** documentation and all tests by Martin Maechler.
### See Also
`<agnes>` for background and references; `<clara.object>`, `<pam>`, `<partition.object>`, `<plot.partition>`.
### Examples
```
## generate 500 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(200,0,8), rnorm(200,0,8)),
cbind(rnorm(300,50,8), rnorm(300,50,8)))
clarax <- clara(x, 2, samples=50)
clarax
clarax$clusinfo
## using pamLike=TRUE gives the same (apart from the 'call'):
all.equal(clarax[-8],
clara(x, 2, samples=50, pamLike = TRUE)[-8])
plot(clarax)
## cluster.only = TRUE -- save some memory/time :
clclus <- clara(x, 2, samples=50, cluster.only = TRUE)
stopifnot(identical(clclus, clarax$clustering))
## 'xclara' is an artificial data set with 3 clusters of 1000 bivariate
## objects each.
data(xclara)
(clx3 <- clara(xclara, 3))
## "better" number of samples
cl.3 <- clara(xclara, 3, samples=100)
## but that did not change the result here:
stopifnot(cl.3$clustering == clx3$clustering)
## Plot similar to Figure 5 in Struyf et al (1996)
## Not run: plot(clx3, ask = TRUE)
## Try 100 times *different* random samples -- for reliability:
nSim <- 100
nCl <- 3 # = no.classes
set.seed(421)# (reproducibility)
cl <- matrix(NA,nrow(xclara), nSim)
for(i in 1:nSim)
cl[,i] <- clara(xclara, nCl, medoids.x = FALSE, rngR = TRUE)$cluster
tcl <- apply(cl,1, tabulate, nbins = nCl)
## those that are not always in same cluster (5 out of 3000 for this seed):
(iDoubt <- which(apply(tcl,2, function(n) all(n < nSim))))
if(length(iDoubt)) { # (not for all seeds)
tabD <- tcl[,iDoubt, drop=FALSE]
dimnames(tabD) <- list(cluster = paste(1:nCl), obs = format(iDoubt))
t(tabD) # how many times in which clusters
}
```
r None
`votes.repub` Votes for Republican Candidate in Presidential Elections
-----------------------------------------------------------------------
### Description
A data frame with the percents of votes given to the republican candidate in presidential elections from 1856 to 1976. Rows represent the 50 states, and columns the 31 elections.
### Usage
```
data(votes.repub)
```
### Source
S. Peterson (1973): *A Statistical History of the American Presidential Elections*. New York: Frederick Ungar Publishing Co.
Data from 1964 to 1976 is from R. M. Scammon, *American Votes 12*, Congressional Quarterly.
r None
`mona.object` Monothetic Analysis (MONA) Object
------------------------------------------------
### Description
The objects of class `"mona"` represent the divisive hierarchical clustering of a dataset with only binary variables (measurements). This class of objects is returned from `<mona>`.
### Value
A legitimate `mona` object is a list with the following components:
| | |
| --- | --- |
| `data` | matrix with the same dimensions as the original data matrix, but with factors coded as 0 and 1, and all missing values replaced. |
| `order` | a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross. |
| `order.lab` | a vector similar to `order`, but containing observation labels instead of observation numbers. This component is only available if the original observations were labelled. |
| `variable` | vector of length n-1 where n is the number of observations, specifying the variables used to separate the observations of `order`. |
| `step` | vector of length n-1 where n is the number of observations, specifying the separation steps at which the observations of `order` are separated. |
### METHODS
The `"mona"` class has methods for the following generic functions: `print`, `summary`, `plot`.
### See Also
`<mona>` for examples etc, `<plot.mona>`.
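A minimal sketch (re-using the `animals` data, see `<mona>`) of inspecting the `variable` and `step` components:

```
library(cluster)
data(animals)
ma <- mona(animals)
str(ma[c("order", "variable", "step")])
plot(ma)   # banner of the divisive hierarchy
```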
r None
`clusGap` Gap Statistic for Estimating the Number of Clusters
--------------------------------------------------------------
### Description
`clusGap()` calculates a goodness of clustering measure, the “gap” statistic. For each number of clusters *k*, it compares *log(W(k))* with *E\*[log(W(k))]* where the latter is defined via bootstrapping, i.e., simulating from a reference (*H\_0*) distribution, a uniform distribution on the hypercube determined by the ranges of `x`, after first centering, and then `[svd](../../base/html/svd)` (aka ‘PCA’)-rotating them when (as by default) `spaceH0 = "scaledPCA"`.
`maxSE(f, SE.f)` determines the location of the **maximum** of `f`, taking a “1-SE rule” into account for the `*SE*` methods. The default method `"firstSEmax"` looks for the smallest *k* such that its value *f(k)* is not more than 1 standard error away from the first local maximum. This is similar but not the same as `"Tibs2001SEmax"`, Tibshirani et al's recommendation of determining the number of clusters from the gap statistics and their standard deviations.
### Usage
```
clusGap(x, FUNcluster, K.max, B = 100, d.power = 1,
spaceH0 = c("scaledPCA", "original"),
verbose = interactive(), ...)
maxSE(f, SE.f,
method = c("firstSEmax", "Tibs2001SEmax", "globalSEmax",
"firstmax", "globalmax"),
SE.factor = 1)
## S3 method for class 'clusGap'
print(x, method = "firstSEmax", SE.factor = 1, ...)
## S3 method for class 'clusGap'
plot(x, type = "b", xlab = "k", ylab = expression(Gap[k]),
main = NULL, do.arrows = TRUE,
arrowArgs = list(col="red3", length=1/16, angle=90, code=3), ...)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric matrix or `[data.frame](../../base/html/data.frame)`. |
| `FUNcluster` | a `[function](../../base/html/function)` which accepts as first argument a (data) matrix like `x`, second argument, say *k, k >= 2*, the number of clusters desired, and returns a `[list](../../base/html/list)` with a component named (or shortened to) `cluster` which is a vector of length `n = nrow(x)` of integers in `1:k` determining the clustering or grouping of the `n` observations. |
| `K.max` | the maximum number of clusters to consider, must be at least two. |
| `B` | integer, number of Monte Carlo (“bootstrap”) samples. |
| `d.power` | a positive integer specifying the power *p* which is applied to the euclidean distances (`[dist](../../stats/html/dist)`) before they are summed up to give *W(k)*. The default, `d.power = 1`, corresponds to the “historical” **R** implementation, whereas `d.power = 2` corresponds to what Tibshirani et al had proposed. This was found by Juan Gonzalez, in 2016-02. |
| `spaceH0` | a `[character](../../base/html/character)` string specifying the space of the *H\_0* distribution (of *no* cluster). Both `"scaledPCA"` and `"original"` use a uniform distribution in a hyper cube and have been mentioned in the reference; `"original"` was added after a proposal (including code) by Juan Gonzalez. |
| `verbose` | integer or logical, determining if “progress” output should be printed. The default prints one bit per bootstrap sample. |
| `...` | (for `clusGap()`:) optionally further arguments for `FUNcluster()`, see `kmeans` example below. |
| `f` | numeric vector of ‘function values’, of length *K*, whose (“1 SE respected”) maximum we want. |
| `SE.f` | numeric vector of length *K* of standard errors of `f`. |
| `method` | character string indicating how the “optimal” number of clusters, *k^*, is computed from the gap statistics (and their standard deviations), or more generally how the location *k^* of the maximum of *f[k]* should be determined.
`"globalmax"`:
simply corresponds to the global maximum, i.e., is `which.max(f)`
`"firstmax"`:
gives the location of the first *local* maximum.
`"Tibs2001SEmax"`:
uses the criterion, Tibshirani et al (2001) proposed: “the smallest *k* such that *f(k) ≥ f(k+1) - s\_{k+1}*”. Note that this chooses *k = 1* when all standard deviations are larger than the differences *f(k+1) - f(k)*.
`"firstSEmax"`:
location of the first *f()* value which is not smaller than the first *local* maximum minus `SE.factor * SE.f[]`, i.e, within an “f S.E.” range of that maximum (see also `SE.factor`). This, the default, has been proposed by Martin Maechler in 2012, when adding `clusGap()` to the cluster package, after having seen the `"globalSEmax"` proposal (in code) and read the `"Tibs2001SEmax"` proposal.
`"globalSEmax"`:
(used in Dudoit and Fridlyand (2002), supposedly following Tibshirani's proposition): location of the first *f()* value which is not smaller than the *global* maximum minus `SE.factor * SE.f[]`, i.e, within an “f S.E.” range of that maximum (see also `SE.factor`). See the examples for a comparison in a simple case. |
| `SE.factor` | [When `method` contains `"SE"`] Determining the optimal number of clusters, Tibshirani et al. proposed the “1 S.E.”-rule. Using an `SE.factor` *f*, the “f S.E.”-rule is used, more generally. |
| `type, xlab, ylab, main` | arguments with the same meaning as in `[plot.default](../../graphics/html/plot.default)()`, with different default. |
| `do.arrows` | logical indicating if (1 SE -)“error bars” should be drawn, via `[arrows](../../graphics/html/arrows)()`. |
| `arrowArgs` | a list of arguments passed to `[arrows](../../graphics/html/arrows)()`; the default, notably `angle` and `code`, provide a style matching usual error bars. |
### Details
The main result `<res>$Tab[,"gap"]` of course is from bootstrapping aka Monte Carlo simulation and hence random, or equivalently, depending on the initial random seed (see `[set.seed](../../base/html/random)()`). On the other hand, in our experience, using `B = 500` gives quite precise results such that the gap plot is basically unchanged after another run.
### Value
`clusGap(..)` returns an object of S3 class `"clusGap"`, basically a list with components
| | |
| --- | --- |
| `Tab` | a matrix with `K.max` rows and 4 columns, named "logW", "E.logW", "gap", and "SE.sim", where `gap = E.logW - logW`, and `SE.sim` corresponds to the standard error of `gap`, `SE.sim[k]=`*s[k]*, where *s[k] := sqrt(1 + 1/B) sd^\*(gap[])*, and *sd^\*()* is the standard deviation of the simulated (“bootstrapped”) gap values. |
| `call` | the `clusGap(..)` `[call](../../base/html/call)`. |
| `spaceH0` | the `spaceH0` argument (`[match.arg](../../base/html/match.arg)()`ed). |
| `n` | number of observations, i.e., `nrow(x)`. |
| `B` | input `B` |
| `FUNcluster` | input function `FUNcluster` |
### Author(s)
This function is originally based on the functions `gap` of (Bioconductor) package SAGx by Per Broberg, `gapStat()` from former package SLmisc by Matthias Kohl and ideas from `gap()` and its methods of package lga by Justin Harrington.
The current implementation is by Martin Maechler.
The implementation of `spaceH0 = "original"` is based on code proposed by Juan Gonzalez.
### References
Tibshirani, R., Walther, G. and Hastie, T. (2001). Estimating the number of data clusters via the Gap statistic. *Journal of the Royal Statistical Society B*, **63**, 411–423.
Tibshirani, R., Walther, G. and Hastie, T. (2000). Estimating the number of clusters in a dataset via the Gap statistic. Technical Report. Stanford.
Dudoit, S. and Fridlyand, J. (2002) A prediction-based resampling method for estimating the number of clusters in a dataset. *Genome Biology* **3**(7). doi: [10.1186/gb-2002-3-7-research0036](https://doi.org/10.1186/gb-2002-3-7-research0036)
Per Broberg (2006). SAGx: Statistical Analysis of the GeneChip. R package version 1.9.7. <http://www.bioconductor.org/packages/release/bioc/html/SAGx.html>
### See Also
`<silhouette>` for a much simpler less sophisticated goodness of clustering measure.
`[cluster.stats](../../fpc/html/cluster.stats)()` in package fpc for alternative measures.
### Examples
```
### --- maxSE() methods -------------------------------------------
(mets <- eval(formals(maxSE)$method))
fk <- c(2,3,5,4,7,8,5,4)
sk <- c(1,1,2,1,1,3,1,1)/2
## use plot.clusGap():
plot(structure(class="clusGap", list(Tab = cbind(gap=fk, SE.sim=sk))))
## Note that 'firstmax' and 'globalmax' are always at 3 and 6 :
sapply(c(1/4, 1,2,4), function(SEf)
sapply(mets, function(M) maxSE(fk, sk, method = M, SE.factor = SEf)))
### --- clusGap() -------------------------------------------------
## ridiculously nicely separated clusters in 3 D :
x <- rbind(matrix(rnorm(150, sd = 0.1), ncol = 3),
matrix(rnorm(150, mean = 1, sd = 0.1), ncol = 3),
matrix(rnorm(150, mean = 2, sd = 0.1), ncol = 3),
matrix(rnorm(150, mean = 3, sd = 0.1), ncol = 3))
## Slightly faster way to use pam (see below)
pam1 <- function(x,k) list(cluster = pam(x,k, cluster.only=TRUE))
## We do not recommend using hier.clustering here, but if you want,
## there is factoextra::hcut () or a cheap version of it
hclusCut <- function(x, k, d.meth = "euclidean", ...)
list(cluster = cutree(hclust(dist(x, method=d.meth), ...), k=k))
## You can manually set it before running this : doExtras <- TRUE # or FALSE
if(!(exists("doExtras") && is.logical(doExtras)))
doExtras <- cluster:::doExtras()
if(doExtras) {
## Note we use B = 60 in the following examples to keep them "speedy".
## ---- rather keep the default B = 500 for your analysis!
## note we can pass 'nstart = 20' to kmeans() :
gskmn <- clusGap(x, FUN = kmeans, nstart = 20, K.max = 8, B = 60)
gskmn #-> its print() method
plot(gskmn, main = "clusGap(., FUN = kmeans, n.start=20, B= 60)")
set.seed(12); system.time(
gsPam0 <- clusGap(x, FUN = pam, K.max = 8, B = 60)
)
set.seed(12); system.time(
gsPam1 <- clusGap(x, FUN = pam1, K.max = 8, B = 60)
)
## and show that it gives the "same":
not.eq <- c("call", "FUNcluster"); n <- names(gsPam0)
eq <- n[!(n %in% not.eq)]
stopifnot(identical(gsPam1[eq], gsPam0[eq]))
print(gsPam1, method="globalSEmax")
print(gsPam1, method="globalmax")
print(gsHc <- clusGap(x, FUN = hclusCut, K.max = 8, B = 60))
}# end {doExtras}
gs.pam.RU <- clusGap(ruspini, FUN = pam1, K.max = 8, B = 60)
gs.pam.RU
plot(gs.pam.RU, main = "Gap statistic for the 'ruspini' data")
mtext("k = 4 is best .. and k = 5 pretty close")
## This takes a minute..
## No clustering ==> k = 1 ("one cluster") should be optimal:
Z <- matrix(rnorm(256*3), 256,3)
gsP.Z <- clusGap(Z, FUN = pam1, K.max = 8, B = 200)
plot(gsP.Z, main = "clusGap(<iid_rnorm_p=3>) ==> k = 1 cluster is optimal")
gsP.Z
```
r None
`animals` Attributes of Animals
--------------------------------
### Description
This data set considers 6 binary attributes for 20 animals.
### Usage
```
data(animals)
```
### Format
A data frame with 20 observations on 6 variables:
| | | |
| --- | --- | --- |
| [ , 1] | war | warm-blooded |
| [ , 2] | fly | can fly |
| [ , 3] | ver | vertebrate |
| [ , 4] | end | endangered |
| [ , 5] | gro | live in groups |
| [ , 6] | hai | have hair |
All variables are encoded as 1 = 'no', 2 = 'yes'.
### Details
This dataset is useful for illustrating monothetic (only a single variable is used for each split) hierarchical clustering.
### Source
Leonard Kaufman and Peter J. Rousseeuw (1990): *Finding Groups in Data* (pp 297ff). New York: Wiley.
### References
see Struyf, Hubert & Rousseeuw (1996), in `<agnes>`.
### Examples
```
data(animals)
apply(animals,2, table) # simple overview
ma <- mona(animals)
ma
## Plot similar to Figure 10 in Struyf et al (1996)
plot(ma)
```
r None
`pluton` Isotopic Composition Plutonium Batches
------------------------------------------------
### Description
The `pluton` data frame has 45 rows and 4 columns, containing percentages of isotopic composition of 45 Plutonium batches.
### Usage
```
data(pluton)
```
### Format
This data frame contains the following columns:
Pu238
the percentages of *(238)Pu*, always less than 2 percent.
Pu239
the percentages of *(239)Pu*, typically between 60 and 80 percent (from neutron capture of Uranium, *(238)U*).
Pu240
percentage of the plutonium 240 isotope.
Pu241
percentage of the plutonium 241 isotope.
### Details
Note that the percentage of plutonium 242 can be computed from the other four percentages, see the examples.
In the reference below it is explained why it is very desirable to combine these plutonium batches in three groups of similar size.
### Source
Available as ‘pluton.dat’ from the archive of the University of Antwerpen, ‘..../datasets/clusplot-examples.tar.gz’, no longer available.
### References
Rousseeuw, P.J. and Kaufman, L and Trauwaert, E. (1996) Fuzzy clustering using scatter matrices, *Computational Statistics and Data Analysis* **23**(1), 135–151.
### Examples
```
data(pluton)
hist(apply(pluton,1,sum), col = "gray") # between 94% and 100%
pu5 <- pluton
pu5$Pu242 <- 100 - apply(pluton,1,sum) # the remaining isotope.
pairs(pu5)
```
r None
`pam` Partitioning Around Medoids
----------------------------------
### Description
Partitioning (clustering) of the data into `k` clusters “around medoids”, a more robust version of K-means.
### Usage
```
pam(x, k, diss = inherits(x, "dist"),
metric = c("euclidean", "manhattan"),
medoids = if(is.numeric(nstart)) "random",
nstart = if(variant == "faster") 1 else NA,
stand = FALSE, cluster.only = FALSE,
do.swap = TRUE,
keep.diss = !diss && !cluster.only && n < 100,
keep.data = !diss && !cluster.only,
variant = c("original", "o_1", "o_2", "f_3", "f_4", "f_5", "faster"),
pamonce = FALSE, trace.lev = 0)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix or data frame, or dissimilarity matrix or object, depending on the value of the `diss` argument. In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (`[NA](../../base/html/na)`s) *are* allowed—as long as every pair of observations has at least one case not missing. In case of a dissimilarity matrix, `x` is typically the output of `<daisy>` or `[dist](../../stats/html/dist)`. Also a vector of length n\*(n-1)/2 is allowed (where n is the number of observations), and will be interpreted in the same way as the output of the above-mentioned functions. Missing values (`[NA](../../base/html/na)`s) are *not* allowed. |
| `k` | positive integer specifying the number of clusters, less than the number of observations. |
| `diss` | logical flag: if TRUE (default for `dist` or `dissimilarity` objects), then `x` will be considered as a dissimilarity matrix. If FALSE, then `x` will be considered as a matrix of observations by variables. |
| `metric` | character string specifying the metric to be used for calculating dissimilarities between observations. The currently available options are "euclidean" and "manhattan". Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `medoids` | NULL (default) or length-`k` vector of integer indices (in `1:n`) specifying initial medoids instead of using the ‘*build*’ algorithm. |
| `nstart` | used only when `medoids = "random"`: specifies the *number* of random “starts”; this argument corresponds to the one of `[kmeans](../../stats/html/kmeans)()` (from **R**'s package stats). |
| `stand` | logical; if true, the measurements in `x` are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `cluster.only` | logical; if true, only the clustering will be computed and returned, see details. |
| `do.swap` | logical indicating if the **swap** phase should happen. The default, `TRUE`, correspond to the original algorithm. On the other hand, the **swap** phase is much more computer intensive than the **build** one for large *n*, so can be skipped by `do.swap = FALSE`. |
| `keep.diss, keep.data` | logicals indicating if the dissimilarities and/or input data `x` should be kept in the result. Setting these to `FALSE` can give much smaller results and hence even save memory allocation *time*. |
| `pamonce` | logical or integer in `0:6` specifying algorithmic short cuts as proposed by Reynolds et al. (2006), and Schubert and Rousseeuw (2019, 2021) see below. |
| `variant` | a `[character](../../base/html/character)` string denoting the variant of PAM algorithm to use; a more self-documenting version of `pamonce` which should be used preferably; note that `"faster"` not only uses `pamonce = 6` but also `nstart = 1` and hence `medoids = "random"` by default. |
| `trace.lev` | integer specifying a trace level for printing diagnostics during the build and swap phase of the algorithm. Default `0` does not print anything; higher values print increasingly more. |
### Details
The basic `pam` algorithm is fully described in chapter 2 of Kaufman and Rousseeuw (1990). Compared to the k-means approach in `kmeans`, the function `pam` has the following features: (a) it also accepts a dissimilarity matrix; (b) it is more robust because it minimizes a sum of dissimilarities instead of a sum of squared euclidean distances; (c) it provides a novel graphical display, the silhouette plot (see `plot.partition`); and (d) it allows selecting the number of clusters using `mean(<silhouette>(pr)[, "sil_width"])` on the result `pr <- pam(..)`, or directly its component `pr$silinfo$avg.width`, see also `<pam.object>`.
When `cluster.only` is true, the result is simply a (possibly named) integer vector specifying the clustering, i.e.,
`pam(x,k, cluster.only=TRUE)` is the same as
`pam(x,k)$clustering` but computed more efficiently.
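For instance (a minimal sketch, assuming `x` is a numeric data matrix such as the one generated in the Examples below):

```
cl <- pam(x, 2, cluster.only = TRUE)   # just the integer clustering vector
all(cl == pam(x, 2)$clustering)        # TRUE: same clustering, computed more cheaply
```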
The `pam`-algorithm is based on the search for `k` representative objects or medoids among the observations of the dataset. These observations should represent the structure of the data. After finding a set of `k` medoids, `k` clusters are constructed by assigning each observation to the nearest medoid. The goal is to find `k` representative objects which minimize the sum of the dissimilarities of the observations to their closest representative object.
By default, when `medoids` are not specified, the algorithm first looks for a good initial set of medoids (this is called the **build** phase). Then it finds a local minimum for the objective function, that is, a solution such that there is no single switch of an observation with a medoid (i.e. a ‘swap’) that will decrease the objective (this is called the **swap** phase).
When the `medoids` are specified (or randomly generated), their order does *not* matter; in general, the algorithms have been designed to not depend on the order of the observations.
The `pamonce` option, new in cluster 1.14.2 (Jan. 2012), has been proposed by Matthias Studer, University of Geneva, based on the findings by Reynolds et al. (2006) and was extended by Erich Schubert, TU Dortmund, with the FastPAM optimizations.
The default `FALSE` (or integer `0`) corresponds to the original “swap” algorithm, whereas `pamonce = 1` (or `TRUE`) corresponds to the first proposal of Reynolds et al. (2006), and `pamonce = 2` additionally implements their second proposal as well.
The key ideas of ‘FastPAM’ (Schubert and Rousseeuw, 2019) are implemented except for the linear approximate build as follows:
`pamonce = 3`:
reduces the runtime by a factor of O(k) by exploiting that points cannot be closest to all current medoids at the same time.
`pamonce = 4`:
additionally allows executing multiple swaps per iteration, usually reducing the number of iterations.
`pamonce = 5`:
adds minor optimizations copied from the `pamonce = 2` approach, and is expected to be the fastest of the ‘FastPAM’ variants included.
‘FasterPAM’ (Schubert and Rousseeuw, 2021) is implemented via
`pamonce = 6`:
executes each improving swap immediately (hence typically multiple swaps per iteration); this swapping algorithm runs in *O(n^2)* rather than *O(n(n-k)k)* time, which is much faster for all but small *k*.
In addition, ‘FasterPAM’ uses *random* initialization of the medoids (instead of the ‘*build*’ phase) to avoid the *O(n^2 k)* initialization cost of the build algorithm. In particular for large k, this yields a much faster algorithm, while preserving a similar result quality.
One may decide to use *repeated* random initialization by setting `nstart > 1`.
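For instance (a hedged sketch; `x` stands in for any numeric data matrix and `k = 3` for the desired number of clusters):

```
p.fast  <- pam(x, 3, variant = "faster")              # random init + FasterPAM swaps
p.fast3 <- pam(x, 3, variant = "faster", nstart = 3)  # keep the best of 3 random starts
```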
### Value
an object of class `"pam"` representing the clustering. See `?<pam.object>` for details.
### Note
For large datasets, `pam` may need too much memory or too much computation time since both are *O(n^2)*. Then, `<clara>()` is preferable, see its documentation.
There is currently a hard limit, *n <= 65536*, at *2^{16}*, because for larger *n*, *n(n-1)/2* is larger than the maximal integer (`[.Machine](../../base/html/zmachine)$integer.max` = *2^{31} - 1*).
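A hedged sketch of the suggested switch to `clara()`, using the `xclara` data set shipped with the package (also used in the `plot.partition` examples below):

```
data(xclara)                          # about 3000 observations
cc <- clara(xclara, 3, samples = 50)  # clusters subsamples, avoiding the O(n^2) cost
cc$medoids
```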
### Author(s)
Kaufman and Rousseeuw's original Fortran code was translated to C and augmented in several ways, e.g. to allow `cluster.only=TRUE` or `do.swap=FALSE`, by Martin Maechler.
Matthias Studer, Univ.Geneva provided the `pamonce` (`1` and `2`) implementation.
Erich Schubert, TU Dortmund contributed the `pamonce` (`3` to `6`) implementation.
### References
Reynolds, A., Richards, G., de la Iglesia, B. and Rayward-Smith, V. (2006) Clustering rules: A comparison of partitioning and hierarchical clustering algorithms; *Journal of Mathematical Modelling and Algorithms* **5**, 475–504. doi: [10.1007/s10852-005-9022-1](https://doi.org/10.1007/s10852-005-9022-1).
Erich Schubert and Peter J. Rousseeuw (2019) Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms; SISAP 2020, 171–187. doi: [10.1007/978-3-030-32047-8\_16](https://doi.org/10.1007/978-3-030-32047-8_16).
Erich Schubert and Peter J. Rousseeuw (2021) Fast and Eager k-Medoids Clustering: O(k) Runtime Improvement of the PAM, CLARA, and CLARANS Algorithms; Preprint, to appear in Information Systems (<https://arxiv.org/abs/2008.05171>).
### See Also
`<agnes>` for background and references; `<pam.object>`, `<clara>`, `<daisy>`, `<partition.object>`, `<plot.partition>`, `[dist](../../stats/html/dist)`.
### Examples
```
## generate 25 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(10,0,0.5), rnorm(10,0,0.5)),
cbind(rnorm(15,5,0.5), rnorm(15,5,0.5)))
pamx <- pam(x, 2)
pamx # Medoids: '7' and '25' ...
summary(pamx)
plot(pamx)
## use obs. 1 & 16 as starting medoids -- same result (typically)
(p2m <- pam(x, 2, medoids = c(1,16)))
## no _build_ *and* no _swap_ phase: just cluster all obs. around (1, 16):
p2.s <- pam(x, 2, medoids = c(1,16), do.swap = FALSE)
p2.s
p3m <- pam(x, 3, trace = 2)
## rather stupid initial medoids:
(p3m. <- pam(x, 3, medoids = 3:1, trace = 1))
pam(daisy(x, metric = "manhattan"), 2, diss = TRUE)
data(ruspini)
## Plot similar to Figure 4 in Struyf et al (1996)
## Not run: plot(pam(ruspini, 4), ask = TRUE)
```
r None
`volume.ellipsoid` Compute the Volume (of an Ellipsoid)
--------------------------------------------------------
### Description
Compute the volume of a geometric **R** object. This is a generic function with a method for `ellipsoid` objects (typically resulting from `<ellipsoidhull>()`).
### Usage
```
volume(object, ...)
## S3 method for class 'ellipsoid'
volume(object, log = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | an **R** object the volume of which is wanted; for the `ellipsoid` method, an object of that class (see `<ellipsoidhull>` or the example below). |
| `log` | `[logical](../../base/html/logical)` indicating if the volume should be returned in log scale. May be needed in largish dimensions. |
| `...` | potential further arguments of methods, e.g. `log`. |
### Value
a number, the volume *V* (or *\log(V)* if `log = TRUE`) of the given `object`.
### Author(s)
Martin Maechler (2002, extracting from former `[clusplot](clusplot.partition)` code); Keefe Murphy (2019) provided code for dimensions *d > 2*.
### See Also
`<ellipsoidhull>` for spanning ellipsoid computation.
### Examples
```
## example(ellipsoidhull) # which defines 'ellipsoid' object <namefoo>
myEl <- structure(list(cov = rbind(c(3,1),1:2), loc = c(0,0), d2 = 10),
class = "ellipsoid")
volume(myEl)# i.e. "area" here (d = 2)
myEl # also mentions the "volume"
set.seed(1)
d5 <- matrix(rt(500, df=3), 100,5)
e5 <- ellipsoidhull(d5)
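## hypothetical continuation: the 5-d hull's volume, also on the log scale
## (as suggested above for larger dimensions)
volume(e5)
volume(e5, log = TRUE)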
```
r None
`summary.mona` Summary Method for 'mona' Objects
-------------------------------------------------
### Description
Returns (and prints) a summary list for a `mona` object.
### Usage
```
## S3 method for class 'mona'
summary(object, ...)
## S3 method for class 'summary.mona'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `<mona>` object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<mona>`, `<mona.object>`.
r None
`print.pam` Print Method for PAM Objects
-----------------------------------------
### Description
Prints the medoids, clustering vector and objective function of a `pam` object.
This is a method for the function `[print](../../base/html/print)()` for objects inheriting from class `<pam>`.
### Usage
```
## S3 method for class 'pam'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a pam object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<pam>`, `<pam.object>`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`.
r None
`plot.diana` Plots of a Divisive Hierarchical Clustering
---------------------------------------------------------
### Description
Creates plots for visualizing a `diana` object.
### Usage
```
## S3 method for class 'diana'
plot(x, ask = FALSE, which.plots = NULL, main = NULL,
sub = paste("Divisive Coefficient = ", round(x$dc, digits = 2)),
adj = 0, nmax.lab = 35, max.strlen = 5, xax.pretty = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object of class `"diana"`, typically created by `<diana>(.)`. |
| `ask` | logical; if true and `which.plots` is `NULL`, `plot.diana` operates in interactive mode, via `[menu](../../utils/html/menu)`. |
| `which.plots` | integer vector or NULL (default), the latter producing both plots. Otherwise, `which.plots` must contain integers of `1` for a *banner* plot or `2` for a dendrogram or “clustering tree”. |
| `main, sub` | main and sub title for the plot, each with a convenient default. See documentation for these arguments in `[plot.default](../../graphics/html/plot.default)`. |
| `adj` | for label adjustment in `<bannerplot>()`. |
| `nmax.lab` | integer indicating the number of labels which is considered too large for single-name labelling the banner plot. |
| `max.strlen` | positive integer giving the length to which strings are truncated in banner plot labeling. |
| `xax.pretty` | logical or integer indicating if `[pretty](../../base/html/pretty)(*, n = xax.pretty)` should be used for the x axis. `xax.pretty = FALSE` is for back compatibility. |
| `...` | graphical parameters (see `[par](../../graphics/html/par)`) may also be supplied and are passed to `<bannerplot>()` or `<pltree>()`, respectively. |
### Details
When `ask = TRUE`, rather than producing each plot sequentially, `plot.diana` displays a menu listing all the plots that can be produced. If the menu is not desired but a pause between plots is still wanted, one must set `par(ask= TRUE)` before invoking the plot command.
The banner displays the hierarchy of clusters, and is equivalent to a tree. See Rousseeuw (1986) or chapter 6 of Kaufman and Rousseeuw (1990). The banner plots the diameter of each cluster being split. The observations are listed in the order found by the `diana` algorithm, and the numbers in the `height` vector are represented as bars between the observations.
The leaves of the clustering tree are the original observations. A branch splits up at the diameter of the cluster being split.
### Side Effects
An appropriate plot is produced on the current graphics device. This can be one or both of the following choices:
Banner
Clustering tree
### Note
In the banner plot, observation labels are only printed when the number of observations is less than `nmax.lab` (35, by default), for readability. Moreover, observation labels are truncated to maximally `max.strlen` (5) characters.
### References
see those in `<plot.agnes>`.
### See Also
`<diana>`, `[diana.object](diana)`, `<twins.object>`, `[par](../../graphics/html/par)`.
### Examples
```
example(diana)# -> dv <- diana(....)
plot(dv, which = 1, nmax.lab = 100)
## wider labels :
op <- par(mar = par("mar") + c(0, 2, 0,0))
plot(dv, which = 1, nmax.lab = 100, max.strlen = 12)
par(op)
```
r None
`print.mona` Print Method for MONA Objects
-------------------------------------------
### Description
Prints the ordering of objects, separation steps, and used variables of a `mona` object.
This is a method for the function `[print](../../base/html/print)()` for objects inheriting from class `<mona>`.
### Usage
```
## S3 method for class 'mona'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a mona object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<mona>`, `<mona.object>`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`.
r None
`plot.mona` Banner of Monothetic Divisive Hierarchical Clusterings
-------------------------------------------------------------------
### Description
Creates the banner of a `mona` object.
### Usage
```
## S3 method for class 'mona'
plot(x, main = paste("Banner of ", deparse(x$call)),
sub = NULL, xlab = "Separation step",
col = c(2,0), axes = TRUE, adj = 0,
nmax.lab = 35, max.strlen = 5, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object of class `"mona"`, typically created by `<mona>(.)`. |
| `main,sub` | main and sub titles for the plot, with convenient defaults. See documentation in `[plot.default](../../graphics/html/plot.default)`. |
| `xlab` | x axis label, see `[title](../../graphics/html/title)`. |
| `col,adj` | graphical parameters passed to `<bannerplot>()`. |
| `axes` | logical, indicating if (labeled) axes should be drawn. |
| `nmax.lab` | integer indicating the number of labels which is considered too large for labeling. |
| `max.strlen` | positive integer giving the length to which strings are truncated in labeling. |
| `...` | further graphical arguments are passed to `<bannerplot>()` and `[text](../../graphics/html/text)`. |
### Details
Plots the separation step at which clusters are split. The observations are given in the order found by the `mona` algorithm, the numbers in the `step` vector are represented as bars between the observations.
When a long bar is drawn between two observations, those observations have the same value for each variable. See chapter 7 of Kaufman and Rousseeuw (1990).
### Side Effects
A banner is plotted on the current graphics device.
### Note
In the banner plot, observation labels are only printed when the number of observations is less than `nmax.lab` (35, by default), for readability. Moreover, observation labels are truncated to maximally `max.strlen` (5) characters.
### References
see those in `<plot.agnes>`.
### See Also
`<mona>`, `<mona.object>`, `[par](../../graphics/html/par)`.
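A minimal hedged sketch of such a banner, using the binary `animals` data set that ships with this package (as in the `agnes` examples):

```
data(animals)
ma <- mona(animals)
plot(ma)   # banner of the monothetic divisive clustering
```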
r None
`plot.agnes` Plots of an Agglomerative Hierarchical Clustering
---------------------------------------------------------------
### Description
Creates plots for visualizing an `agnes` object.
### Usage
```
## S3 method for class 'agnes'
plot(x, ask = FALSE, which.plots = NULL, main = NULL,
sub = paste("Agglomerative Coefficient = ",round(x$ac, digits = 2)),
adj = 0, nmax.lab = 35, max.strlen = 5, xax.pretty = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object of class `"agnes"`, typically created by `<agnes>(.)`. |
| `ask` | logical; if true and `which.plots` is `NULL`, `plot.agnes` operates in interactive mode, via `[menu](../../utils/html/menu)`. |
| `which.plots` | integer vector or NULL (default), the latter producing both plots. Otherwise, `which.plots` must contain integers of `1` for a *banner* plot or `2` for a dendrogram or “clustering tree”. |
| `main, sub` | main and sub title for the plot, with convenient defaults. See documentation for these arguments in `[plot.default](../../graphics/html/plot.default)`. |
| `adj` | for label adjustment in `<bannerplot>()`. |
| `nmax.lab` | integer indicating the number of labels which is considered too large for single-name labelling the banner plot. |
| `max.strlen` | positive integer giving the length to which strings are truncated in banner plot labeling. |
| `xax.pretty` | logical or integer indicating if `[pretty](../../base/html/pretty)(*, n = xax.pretty)` should be used for the x axis. `xax.pretty = FALSE` is for back compatibility. |
| `...` | graphical parameters (see `[par](../../graphics/html/par)`) may also be supplied and are passed to `<bannerplot>()` or `pltree()` (see `[pltree.twins](pltree)`), respectively. |
### Details
When `ask = TRUE`, rather than producing each plot sequentially, `plot.agnes` displays a menu listing all the plots that can be produced. If the menu is not desired but a pause between plots is still wanted, one must set `par(ask= TRUE)` before invoking the plot command.
The banner displays the hierarchy of clusters, and is equivalent to a tree. See Rousseeuw (1986) or chapter 5 of Kaufman and Rousseeuw (1990). The banner plots distances at which observations and clusters are merged. The observations are listed in the order found by the `agnes` algorithm, and the numbers in the `height` vector are represented as bars between the observations.
The leaves of the clustering tree are the original observations. Two branches come together at the distance between the two clusters being merged.
For more customization of the plots, rather call `<bannerplot>` and `pltree()`, i.e., its method `[pltree.twins](pltree)`, directly with corresponding arguments, e.g., `xlab` or `ylab`.
### Side Effects
Appropriate plots are produced on the current graphics device. This can be one or both of the following choices:
Banner
Clustering tree
### Note
In the banner plot, observation labels are only printed when the number of observations is less than `nmax.lab` (35, by default), for readability. Moreover, observation labels are truncated to maximally `max.strlen` (5) characters.
For the dendrogram, more flexibility than via `pltree()` is provided by `dg <- [as.dendrogram](../../stats/html/dendrogram)(x)` and plotting `dg` via `[plot.dendrogram](../../stats/html/dendrogram)`.
### References
Kaufman, L. and Rousseeuw, P.J. (1990) *Finding Groups in Data: An Introduction to Cluster Analysis*. Wiley, New York.
Rousseeuw, P.J. (1986). A visual display for hierarchical classification, in *Data Analysis and Informatics 4*; edited by E. Diday, Y. Escoufier, L. Lebart, J. Pages, Y. Schektman, and R. Tomassone. North-Holland, Amsterdam, 743–748.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997) Integrating Robust Clustering Techniques in S-PLUS, *Computational Statistics and Data Analysis*, **26**, 17–37.
### See Also
`<agnes>` and `<agnes.object>`; `<bannerplot>`, `[pltree.twins](pltree)`, and `[par](../../graphics/html/par)`.
### Examples
```
## Can also pass 'labels' to pltree() and bannerplot():
data(iris)
cS <- as.character(Sp <- iris$Species)
cS[Sp == "setosa"] <- "S"
cS[Sp == "versicolor"] <- "V"
cS[Sp == "virginica"] <- "g"
ai <- agnes(iris[, 1:4])
plot(ai, labels = cS, nmax = 150)# bannerplot labels are a mess
```
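As the Note above suggests, the dendrogram route offers more flexibility; a minimal sketch reusing the `ai` object from the example:

```
dg <- as.dendrogram(ai)   # via the as.hclust()/as.dendrogram() methods for 'twins' objects
plot(dg, nodePar = list(pch = NA, lab.cex = 0.6))
```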
r None
`plot.partition` Plot of a Partition of the Data Set
-----------------------------------------------------
### Description
Creates plots for visualizing a `partition` object.
### Usage
```
## S3 method for class 'partition'
plot(x, ask = FALSE, which.plots = NULL,
nmax.lab = 40, max.strlen = 5, data = x$data, dist = NULL,
stand = FALSE, lines = 2,
shade = FALSE, color = FALSE, labels = 0, plotchar = TRUE,
span = TRUE, xlim = NULL, ylim = NULL, main = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object of class `"partition"`, typically created by the functions `<pam>`, `<clara>`, or `<fanny>`. |
| `ask` | logical; if true and `which.plots` is `NULL`, `plot.partition` operates in interactive mode, via `[menu](../../utils/html/menu)`. |
| `which.plots` | integer vector or NULL (default), the latter producing both plots. Otherwise, `which.plots` must contain integers of `1` for a *clusplot* or `2` for *silhouette*. |
| `nmax.lab` | integer indicating the number of labels which is considered too large for single-name labeling the silhouette plot. |
| `max.strlen` | positive integer giving the length to which strings are truncated in silhouette plot labeling. |
| `data` | numeric matrix with the scaled data; by default taken from the partition object `x`, but can be specified explicitly. |
| `dist` | when `x` does not have a `diss` component as for `<pam>(*, keep.diss=FALSE)`, `dist` must be the dissimilarity if a clusplot is desired. |
| `stand,lines,shade,color,labels,plotchar,span,xlim,ylim,main, ...` | All optional arguments available for the `<clusplot.default>` function (except for the `diss` one) and graphical parameters (see `[par](../../graphics/html/par)`) may also be supplied as arguments to this function. |
### Details
When `ask= TRUE`, rather than producing each plot sequentially, `plot.partition` displays a menu listing all the plots that can be produced. If the menu is not desired but a pause between plots is still wanted, call `par(ask= TRUE)` before invoking the plot command.
The *clusplot* of a cluster partition consists of a two-dimensional representation of the observations, in which the clusters are indicated by ellipses (see `<clusplot.partition>` for more details).
The *silhouette plot* of a nonhierarchical clustering is fully described in Rousseeuw (1987) and in chapter 2 of Kaufman and Rousseeuw (1990). For each observation i, a bar is drawn, representing its silhouette width s(i), see `<silhouette>` for details. Observations are grouped per cluster, starting with cluster 1 at the top. Observations with a large s(i) (almost 1) are very well clustered, a small s(i) (around 0) means that the observation lies between two clusters, and observations with a negative s(i) are probably placed in the wrong cluster.
A clustering can be performed for several values of `k` (the number of clusters). Finally, choose the value of `k` with the largest overall average silhouette width.
### Side Effects
An appropriate plot is produced on the current graphics device. This can be one or both of the following choices:
Clusplot
Silhouette plot
### Note
In the silhouette plot, observation labels are only printed when the number of observations is less than `nmax.lab` (40, by default), for readability. Moreover, observation labels are truncated to maximally `max.strlen` (5) characters.
For more flexibility, use `plot(silhouette(x), ...)`, see `[plot.silhouette](silhouette)`.
### References
Rousseeuw, P.J. (1987) Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. *J. Comput. Appl. Math.*, **20**, 53–65.
Further, the references in `<plot.agnes>`.
### See Also
`<partition.object>`, `<clusplot.partition>`, `<clusplot.default>`, `<pam>`, `<pam.object>`, `<clara>`, `<clara.object>`, `<fanny>`, `<fanny.object>`, `[par](../../graphics/html/par)`.
### Examples
```
## generate 25 objects, divided into 2 clusters.
x <- rbind(cbind(rnorm(10,0,0.5), rnorm(10,0,0.5)),
cbind(rnorm(15,5,0.5), rnorm(15,5,0.5)))
plot(pam(x, 2))
## Save space not keeping data in clus.object, and still clusplot() it:
data(xclara)
cx <- clara(xclara, 3, keep.data = FALSE)
cx$data # is NULL
plot(cx, data = xclara)
```
r None
`pltree` Plot Clustering Tree of a Hierarchical Clustering
-----------------------------------------------------------
### Description
`pltree()` draws a clustering tree (“dendrogram”) on the current graphics device. We provide the `twins` method, which draws the tree of a `twins` object, i.e., a hierarchical clustering, typically resulting from `<agnes>()` or `<diana>()`.
### Usage
```
pltree(x, ...)
## S3 method for class 'twins'
pltree(x, main = paste("Dendrogram of ", deparse(x$call)),
labels = NULL, ylab = "Height", ...)
```
### Arguments
| | |
| --- | --- |
| `x` | in general, an **R** object for which a `pltree` method is defined; specifically, an object of class `"twins"`, typically created by either `<agnes>()` or `<diana>()`. |
| `main` | main title with a sensible default. |
| `labels` | labels to use; the default is constructed from `x`. |
| `ylab` | label for y-axis. |
| `...` | graphical parameters (see `[par](../../graphics/html/par)`) may also be supplied as arguments to this function. |
### Details
Creates a plot of a clustering tree given a `twins` object. The leaves of the tree are the original observations. In case of an agglomerative clustering, two branches come together at the distance between the two clusters being merged. For a divisive clustering, a branch splits up at the diameter of the cluster being split.
Note that currently the method function simply calls `plot([as.hclust](../../stats/html/as.hclust)(x), ...)`, which dispatches to `[plot.hclust](../../stats/html/hclust)(..)`. If more flexible plots are needed, consider `xx <- [as.dendrogram](../../stats/html/dendrogram)([as.hclust](../../stats/html/as.hclust)(x))` and plotting `xx`, see `[plot.dendrogram](../../stats/html/dendrogram)`.
### Value
a NULL value is returned.
### See Also
`<agnes>`, `<agnes.object>`, `<diana>`, `[diana.object](diana)`, `[hclust](../../stats/html/hclust)`, `[par](../../graphics/html/par)`, `<plot.agnes>`, `<plot.diana>`.
### Examples
```
data(votes.repub)
agn <- agnes(votes.repub)
pltree(agn)
dagn <- as.dendrogram(as.hclust(agn))
dagn2 <- as.dendrogram(as.hclust(agn), hang = 0.2)
op <- par(mar = par("mar") + c(0,0,0, 2)) # more space to the right
plot(dagn2, horiz = TRUE)
plot(dagn, horiz = TRUE, center = TRUE,
nodePar = list(lab.cex = 0.6, lab.col = "forest green", pch = NA),
main = deparse(agn$call))
par(op)
```
r None
`pam.object` Partitioning Around Medoids (PAM) Object
------------------------------------------------------
### Description
The objects of class `"pam"` represent a partitioning of a dataset into clusters.
### Value
A legitimate `pam` object is a `[list](../../base/html/list)` with the following components:
| | |
| --- | --- |
| `medoids` | the medoids or representative objects of the clusters. If a dissimilarity matrix was given as input to `pam`, then a vector of numbers or labels of observations is given, else `medoids` is a `[matrix](../../base/html/matrix)` with in each row the coordinates of one medoid. |
| `id.med` | integer vector of *indices* giving the medoid observation numbers. |
| `clustering` | the clustering vector, see `<partition.object>`. |
| `objective` | the objective function after the first and second step of the `pam` algorithm. |
| `isolation` | vector with length equal to the number of clusters, specifying which clusters are isolated clusters (L- or L\*-clusters) and which clusters are not isolated. A cluster is an L\*-cluster iff its diameter is smaller than its separation. A cluster is an L-cluster iff for each observation i the maximal dissimilarity between i and any other observation of the cluster is smaller than the minimal dissimilarity between i and any observation of another cluster. Clearly each L\*-cluster is also an L-cluster. |
| `clusinfo` | matrix, each row gives numerical information for one cluster. These are the cardinality of the cluster (number of observations), the maximal and average dissimilarity between the observations in the cluster and the cluster's medoid, the diameter of the cluster (maximal dissimilarity between two observations of the cluster), and the separation of the cluster (minimal dissimilarity between an observation of the cluster and an observation of another cluster). |
| `silinfo` | list with silhouette width information, see `<partition.object>`. |
| `diss` | dissimilarity (maybe NULL), see `<partition.object>`. |
| `call` | generating call, see `<partition.object>`. |
| `data` | (possibly standardized) data, see `<partition.object>`. |
### GENERATION
These objects are returned from `<pam>`.
### METHODS
The `"pam"` class has methods for the following generic functions: `print`, `summary`.
### INHERITANCE
The class `"pam"` inherits from `"partition"`. Therefore, the generic functions `plot` and `clusplot` can be used on a `pam` object.
### See Also
`<pam>`, `<dissimilarity.object>`, `<partition.object>`, `<plot.partition>`.
### Examples
```
## Use the silhouette widths for assessing the best number of clusters,
## following a one-dimensional example from Christian Hennig :
##
x <- c(rnorm(50), rnorm(50,mean=5), rnorm(30,mean=15))
asw <- numeric(20)
## Note that "k=1" won't work!
for (k in 2:20)
asw[k] <- pam(x, k) $ silinfo $ avg.width
k.best <- which.max(asw)
cat("silhouette-optimal number of clusters:", k.best, "\n")
plot(1:20, asw, type= "h", main = "pam() clustering assessment",
xlab= "k (# clusters)", ylab = "average silhouette width")
axis(1, k.best, paste("best",k.best,sep="\n"), col = "red", col.axis = "red")
```
r None
`diana` DIvisive ANAlysis Clustering
-------------------------------------
### Description
Computes a divisive hierarchical clustering of the dataset returning an object of class `diana`.
### Usage
```
diana(x, diss = inherits(x, "dist"), metric = "euclidean", stand = FALSE,
stop.at.k = FALSE,
keep.diss = n < 100, keep.data = !diss, trace.lev = 0)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix or data frame, or dissimilarity matrix or object, depending on the value of the `diss` argument. In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (`[NA](../../base/html/na)`s) *are* allowed. In case of a dissimilarity matrix, `x` is typically the output of `<daisy>` or `[dist](../../stats/html/dist)`. Also a vector of length n\*(n-1)/2 is allowed (where n is the number of observations), and will be interpreted in the same way as the output of the above-mentioned functions. Missing values (NAs) are *not* allowed. |
| `diss` | logical flag: if TRUE (default for `dist` or `dissimilarity` objects), then `x` will be considered as a dissimilarity matrix. If FALSE, then `x` will be considered as a matrix of observations by variables. |
| `metric` | character string specifying the metric to be used for calculating dissimilarities between observations. The currently available options are "euclidean" and "manhattan". Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `stand` | logical; if true, the measurements in `x` are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `stop.at.k` | logical or integer, `FALSE` by default. Otherwise must be integer, say *k*, in *\{1,2,..,n\}*, specifying that the `diana` algorithm should stop early. Non-default NOT YET IMPLEMENTED. |
| `keep.diss, keep.data` | logicals indicating if the dissimilarities and/or input data `x` should be kept in the result. Setting these to `FALSE` can give much smaller results and hence even save memory allocation *time*. |
| `trace.lev` | integer specifying a trace level for printing diagnostics during the algorithm. Default `0` does not print anything; higher values print increasingly more. |
### Details
`diana` is fully described in chapter 6 of Kaufman and Rousseeuw (1990). It is probably unique in computing a divisive hierarchy, whereas most other software for hierarchical clustering is agglomerative. Moreover, `diana` provides (a) the divisive coefficient (see `diana.object`) which measures the amount of clustering structure found; and (b) the banner, a novel graphical display (see `plot.diana`).
The `diana`-algorithm constructs a hierarchy of clusterings, starting with one large cluster containing all n observations. Clusters are divided until each cluster contains only a single observation.
At each stage, the cluster with the largest diameter is selected. (The diameter of a cluster is the largest dissimilarity between any two of its observations.)
To divide the selected cluster, the algorithm first looks for its most disparate observation (i.e., which has the largest average dissimilarity to the other observations of the selected cluster). This observation initiates the "splinter group". In subsequent steps, the algorithm reassigns observations that are closer to the "splinter group" than to the "old party". The result is a division of the selected cluster into two new clusters.
### Value
an object of class `"diana"` representing the clustering; this class has methods for the following generic functions: `print`, `summary`, `plot`.
Further, the class `"diana"` inherits from `"twins"`. Therefore, the generic function `<pltree>` can be used on a `diana` object, and `[as.hclust](../../stats/html/as.hclust)` and `[as.dendrogram](../../stats/html/dendrogram)` methods are available.
A legitimate `diana` object is a list with the following components:
| | |
| --- | --- |
| `order` | a vector giving a permutation of the original observations to allow for plotting, in the sense that the branches of a clustering tree will not cross. |
| `order.lab` | a vector similar to `order`, but containing observation labels instead of observation numbers. This component is only available if the original observations were labelled. |
| `height` | a vector with the diameters of the clusters prior to splitting. |
| `dc` | the divisive coefficient, measuring the clustering structure of the dataset. For each observation i, denote by *d(i)* the diameter of the last cluster to which it belongs (before being split off as a single observation), divided by the diameter of the whole dataset. The `dc` is the average of all *1 - d(i)*. It can also be seen as the average width (or the percentage filled) of the banner plot. Because `dc` grows with the number of observations, this measure should not be used to compare datasets of very different sizes. |
| `merge` | an (n-1) by 2 matrix, where n is the number of observations. Row i of `merge` describes the split at step n-i of the clustering. If a number *j* in row r is negative, then the single observation *|j|* is split off at stage n-r. If j is positive, then the cluster that will be split at stage n-j (described by row j), is split off at stage n-r. |
| `diss` | an object of class `"dissimilarity"`, representing the total dissimilarity matrix of the dataset. |
| `data` | a matrix containing the original or standardized measurements, depending on the `stand` option of the function `agnes`. If a dissimilarity matrix was given as input structure, then this component is not available. |
### See Also
`<agnes>` also for background and references; `[cutree](../../stats/html/cutree)` (and `[as.hclust](../../stats/html/as.hclust)`) for grouping extraction; `<daisy>`, `[dist](../../stats/html/dist)`, `<plot.diana>`, `<twins.object>`.
### Examples
```
data(votes.repub)
dv <- diana(votes.repub, metric = "manhattan", stand = TRUE)
print(dv)
plot(dv)
## Cut into 2 groups:
dv2 <- cutree(as.hclust(dv), k = 2)
table(dv2) # 8 and 42 group members
rownames(votes.repub)[dv2 == 1]
## For two groups, does the metric matter ?
dv0 <- diana(votes.repub, stand = TRUE) # default: Euclidean
dv.2 <- cutree(as.hclust(dv0), k = 2)
table(dv2 == dv.2)## identical group assignments
str(as.dendrogram(dv0)) # {via as.dendrogram.twins() method}
data(agriculture)
## Plot similar to Figure 8 in ref
## Not run: plot(diana(agriculture), ask = TRUE)
```
r None
`lower.to.upper.tri.inds` Permute Indices for Triangular Matrices
------------------------------------------------------------------
### Description
Compute index vectors for extracting or reordering of lower or upper triangular matrices that are stored as contiguous vectors.
### Usage
```
lower.to.upper.tri.inds(n)
upper.to.lower.tri.inds(n)
```
### Arguments
| | |
| --- | --- |
| `n` | integer larger than 1. |
### Value
integer vector containing a permutation of `1:N` where *N = n(n-1)/2*.
### See Also
`[upper.tri](../../base/html/lower.tri)`, `[lower.tri](../../base/html/lower.tri)` with a related purpose.
### Examples
```
m5 <- matrix(NA,5,5)
m <- m5; m[lower.tri(m)] <- upper.to.lower.tri.inds(5); m
m <- m5; m[upper.tri(m)] <- lower.to.upper.tri.inds(5); m
stopifnot(lower.to.upper.tri.inds(2) == 1,
lower.to.upper.tri.inds(3) == 1:3,
upper.to.lower.tri.inds(3) == 1:3,
sort(upper.to.lower.tri.inds(5)) == 1:10,
sort(lower.to.upper.tri.inds(6)) == 1:15)
```
r None
`agnes` Agglomerative Nesting (Hierarchical Clustering)
--------------------------------------------------------
### Description
Computes agglomerative hierarchical clustering of the dataset.
### Usage
```
agnes(x, diss = inherits(x, "dist"), metric = "euclidean",
stand = FALSE, method = "average", par.method,
keep.diss = n < 100, keep.data = !diss, trace.lev = 0)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix or data frame, or dissimilarity matrix, depending on the value of the `diss` argument. In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (NAs) are allowed. In case of a dissimilarity matrix, `x` is typically the output of `<daisy>` or `[dist](../../stats/html/dist)`. Also a vector with length n\*(n-1)/2 is allowed (where n is the number of observations), and will be interpreted in the same way as the output of the above-mentioned functions. Missing values (NAs) are not allowed. |
| `diss` | logical flag: if TRUE (default for `dist` or `dissimilarity` objects), then `x` is assumed to be a dissimilarity matrix. If FALSE, then `x` is treated as a matrix of observations by variables. |
| `metric` | character string specifying the metric to be used for calculating dissimilarities between observations. The currently available options are `"euclidean"` and `"manhattan"`. Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `stand` | logical flag: if TRUE, then the measurements in `x` are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `method` | character string defining the clustering method. The six methods implemented are `"average"` ([unweighted pair-]group [arithMetic] average method, aka ‘UPGMA’), `"single"` (single linkage), `"complete"` (complete linkage), `"ward"` (Ward's method), `"weighted"` (weighted average linkage, aka ‘WPGMA’), its generalization `"flexible"` which uses (a constant version of) the Lance-Williams formula and the `par.method` argument, and `"gaverage"` a generalized `"average"` aka “flexible UPGMA” method also using the Lance-Williams formula and `par.method`. The default is `"average"`. |
| `par.method` | If `method` is `"flexible"` or `"gaverage"`, a numeric vector of length 1, 3, or 4, (with a default for `"gaverage"`), see in the details section. |
| `keep.diss, keep.data` | logicals indicating if the dissimilarities and/or input data `x` should be kept in the result. Setting these to `FALSE` can give much smaller results and hence even save memory allocation *time*. |
| `trace.lev` | integer specifying a trace level for printing diagnostics during the algorithm. Default `0` does not print anything; higher values print increasingly more. |
### Details
`agnes` is fully described in chapter 5 of Kaufman and Rousseeuw (1990). Compared to other agglomerative clustering methods such as `hclust`, `agnes` has the following features: (a) it yields the agglomerative coefficient (see `<agnes.object>`) which measures the amount of clustering structure found; and (b) apart from the usual tree it also provides the banner, a novel graphical display (see `<plot.agnes>`).
The `agnes`-algorithm constructs a hierarchy of clusterings.
At first, each observation is a small cluster by itself. Clusters are merged until only one large cluster remains which contains all the observations. At each stage the two *nearest* clusters are combined to form one larger cluster.
For `method="average"`, the distance between two clusters is the average of the dissimilarities between the points in one cluster and the points in the other cluster.
In `method="single"`, we use the smallest dissimilarity between a point in the first cluster and a point in the second cluster (nearest neighbor method).
When `method="complete"`, we use the largest dissimilarity between a point in the first cluster and a point in the second cluster (furthest neighbor method).
The `method = "flexible"` allows (and requires) more details: The Lance-Williams formula specifies how dissimilarities are computed when clusters are agglomerated (equation (32) in K&R(1990), p.237). If clusters *C\_1* and *C\_2* are agglomerated into a new cluster, the dissimilarity between their union and another cluster *Q* is given by
*D(C\_1 \cup C\_2, Q) = α\_1 \* D(C\_1, Q) + α\_2 \* D(C\_2, Q) + β \* D(C\_1,C\_2) + γ \* |D(C\_1, Q) - D(C\_2, Q)|,*
where the four coefficients *(α\_1, α\_2, β, γ)* are specified by the vector `par.method`, either directly as vector of length 4, or (more conveniently) if `par.method` is of length 1, say *= α*, `par.method` is extended to give the “Flexible Strategy” (K&R(1990), p.236 f) with Lance-Williams coefficients *(α\_1 = α\_2 = α, β = 1 - 2α, γ=0)*.
Also, if `length(par.method) == 3`, *γ = 0* is set.
**Care** and expertise is probably needed when using `method = "flexible"` particularly for the case when `par.method` is specified of longer length than one. Since cluster version 2.0, choices leading to invalid `merge` structures now signal an error (from the C code already). The *weighted average* (`method="weighted"`) is the same as `method="flexible", par.method = 0.5`. Further, `method= "single"` is equivalent to `method="flexible", par.method = c(.5,.5,0,-.5)`, and `method="complete"` is equivalent to `method="flexible", par.method = c(.5,.5,0,+.5)`.
The `method = "gaverage"` is a generalization of `"average"`, aka “flexible UPGMA” method, and is (a generalization of the approach) detailed in Belbin et al. (1992). As `"flexible"`, it uses the Lance-Williams formula above for dissimilarity updating, but with *α\_1* and *α\_2* not constant, but *proportional* to the *sizes* *n\_1* and *n\_2* of the clusters *C\_1* and *C\_2* respectively, i.e,
*α\_j = α'\_j \* n\_j/(n\_1 + n\_2),*
where *α'\_1*, *α'\_2* are determined from `par.method`, either directly as *(α\_1, α\_2, β, γ)* or *(α\_1, α\_2, β)* with *γ = 0*, or (less flexibly, but more conveniently) as follows:
Belbin et al proposed “flexible beta”, i.e. the user would only specify *β* (as `par.method`), sensibly in
*-1 ≤ β < 1,*
and *β* determines *α'\_1* and *α'\_2* as
*α'\_j = 1 - β,*
and *γ = 0*.
This *β* may be specified by `par.method` (as length 1 vector), and if `par.method` is not specified, a default value of -0.1 is used, as Belbin et al recommend taking a *β* value around -0.1 as a general agglomerative hierarchical clustering strategy.
Note that `method = "gaverage", par.method = 0` (or `par.method =
c(1,1,0,0)`) is equivalent to the `agnes()` default method `"average"`.
### Value
an object of class `"agnes"` (which extends `"twins"`) representing the clustering. See `<agnes.object>` for details, and methods applicable.
### BACKGROUND
Cluster analysis divides a dataset into groups (clusters) of observations that are similar to each other.
Hierarchical methods
like `agnes`, `<diana>`, and `<mona>` construct a hierarchy of clusterings, with the number of clusters ranging from one to the number of observations.
Partitioning methods
like `<pam>`, `<clara>`, and `<fanny>` require that the number of clusters be given by the user.
### Author(s)
Method `"gaverage"` has been contributed by Pierre Roudier, Landcare Research, New Zealand.
### References
Kaufman, L. and Rousseeuw, P.J. (1990). (=: “K&R(1990)”) *Finding Groups in Data: An Introduction to Cluster Analysis*. Wiley, New York.
Anja Struyf, Mia Hubert and Peter J. Rousseeuw (1996) Clustering in an Object-Oriented Environment. *Journal of Statistical Software* **1**. doi: [10.18637/jss.v001.i04](https://doi.org/10.18637/jss.v001.i04)
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997). Integrating Robust Clustering Techniques in S-PLUS, *Computational Statistics and Data Analysis*, **26**, 17–37.
Lance, G.N., and W.T. Williams (1966). A General Theory of Classifactory Sorting Strategies, I. Hierarchical Systems. *Computer J.* **9**, 373–380.
Belbin, L., Faith, D.P. and Milligan, G.W. (1992). A Comparison of Two Approaches to Beta-Flexible Clustering. *Multivariate Behavioral Research*, **27**, 417–433.
### See Also
`<agnes.object>`, `<daisy>`, `<diana>`, `[dist](../../stats/html/dist)`, `[hclust](../../stats/html/hclust)`, `<plot.agnes>`, `<twins.object>`.
### Examples
```
data(votes.repub)
agn1 <- agnes(votes.repub, metric = "manhattan", stand = TRUE)
agn1
plot(agn1)
op <- par(mfrow=c(2,2))
agn2 <- agnes(daisy(votes.repub), diss = TRUE, method = "complete")
plot(agn2)
## alpha = 0.625 ==> beta = -1/4 is "recommended" by some
agnS <- agnes(votes.repub, method = "flexible", par.meth = 0.625)
plot(agnS)
par(op)
## "show" equivalence of three "flexible" special cases
d.vr <- daisy(votes.repub)
a.wgt <- agnes(d.vr, method = "weighted")
a.sing <- agnes(d.vr, method = "single")
a.comp <- agnes(d.vr, method = "complete")
iC <- -(6:7) # not using 'call' and 'method' for comparisons
stopifnot(
all.equal(a.wgt [iC], agnes(d.vr, method="flexible", par.method = 0.5)[iC]) ,
all.equal(a.sing[iC], agnes(d.vr, method="flex", par.method= c(.5,.5,0, -.5))[iC]),
all.equal(a.comp[iC], agnes(d.vr, method="flex", par.method= c(.5,.5,0, +.5))[iC]))
## Exploring the dendrogram structure
(d2 <- as.dendrogram(agn2)) # two main branches
d2[[1]] # the first branch
d2[[2]] # the 2nd one { 8 + 42 = 50 }
d2[[1]][[1]]# first sub-branch of branch 1 .. and shorter form
identical(d2[[c(1,1)]],
d2[[1]][[1]])
## a "textual picture" of the dendrogram :
str(d2)
data(agriculture)
## Plot similar to Figure 7 in ref
## Not run: plot(agnes(agriculture), ask = TRUE)
data(animals)
aa.a <- agnes(animals) # default method = "average"
aa.ga <- agnes(animals, method = "gaverage")
op <- par(mfcol=1:2, mgp=c(1.5, 0.6, 0), mar=c(.1+ c(4,3,2,1)),
cex.main=0.8)
plot(aa.a, which.plot = 2)
plot(aa.ga, which.plot = 2)
par(op)
## Show how "gaverage" is a "generalized average":
aa.ga.0 <- agnes(animals, method = "gaverage", par.method = 0)
stopifnot(all.equal(aa.ga.0[iC], aa.a[iC]))
```
r None
`fanny` Fuzzy Analysis Clustering
----------------------------------
### Description
Computes a fuzzy clustering of the data into `k` clusters.
### Usage
```
fanny(x, k, diss = inherits(x, "dist"), memb.exp = 2,
metric = c("euclidean", "manhattan", "SqEuclidean"),
stand = FALSE, iniMem.p = NULL, cluster.only = FALSE,
keep.diss = !diss && !cluster.only && n < 100,
keep.data = !diss && !cluster.only,
maxit = 500, tol = 1e-15, trace.lev = 0)
```
### Arguments
| | |
| --- | --- |
| `x` | data matrix or data frame, or dissimilarity matrix, depending on the value of the `diss` argument. In case of a matrix or data frame, each row corresponds to an observation, and each column corresponds to a variable. All variables must be numeric. Missing values (NAs) are allowed. In case of a dissimilarity matrix, `x` is typically the output of `<daisy>` or `[dist](../../stats/html/dist)`. Also a vector of length n\*(n-1)/2 is allowed (where n is the number of observations), and will be interpreted in the same way as the output of the above-mentioned functions. Missing values (NAs) are not allowed. |
| `k` | integer giving the desired number of clusters. It is required that *0 < k < n/2* where *n* is the number of observations. |
| `diss` | logical flag: if TRUE (default for `dist` or `dissimilarity` objects), then `x` is assumed to be a dissimilarity matrix. If FALSE, then `x` is treated as a matrix of observations by variables. |
| `memb.exp` | number *r* strictly larger than 1 specifying the *membership exponent* used in the fit criterion; see the ‘Details’ below. Default: `2` which used to be hardwired inside FANNY. |
| `metric` | character string specifying the metric to be used for calculating dissimilarities between observations. Options are `"euclidean"` (default), `"manhattan"`, and `"SqEuclidean"`. Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences, and `"SqEuclidean"`, the *squared* euclidean distances are sum-of-squares of differences. Using this last option is equivalent (but somewhat slower) to computing so called “fuzzy C-means”. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `stand` | logical; if true, the measurements in `x` are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. If `x` is already a dissimilarity matrix, then this argument will be ignored. |
| `iniMem.p` | numeric *n x k* matrix or `NULL` (by default); can be used to specify a starting `membership` matrix, i.e., a matrix of non-negative numbers, each row summing to one. |
| `cluster.only` | logical; if true, no silhouette information will be computed and returned, see details. |
| `keep.diss, keep.data` | logicals indicating if the dissimilarities and/or input data `x` should be kept in the result. Setting these to `FALSE` can give smaller results and hence also save memory allocation *time*. |
| `maxit, tol` | maximal number of iterations and default tolerance for convergence (relative convergence of the fit criterion) for the FANNY algorithm. The defaults `maxit = 500` and `tol = 1e-15` used to be hardwired inside the algorithm. |
| `trace.lev` | integer specifying a trace level for printing diagnostics during the C-internal algorithm. Default `0` does not print anything; higher values print increasingly more. |
### Details
In a fuzzy clustering, each observation is “spread out” over the various clusters. Denote by *u(i,v)* the membership of observation *i* to cluster *v*.
The memberships are nonnegative, and for a fixed observation i they sum to 1. The particular method `fanny` stems from chapter 4 of Kaufman and Rousseeuw (1990) (see the references in `<daisy>`) and has been extended by Martin Maechler to allow user specified `memb.exp`, `iniMem.p`, `maxit`, `tol`, etc.
Fanny aims to minimize the objective function
*SUM\_[v=1..k] (SUM\_(i,j) u(i,v)^r u(j,v)^r d(i,j)) / (2 SUM\_j u(j,v)^r)*
where *n* is the number of observations, *k* is the number of clusters, *r* is the membership exponent `memb.exp` and *d(i,j)* is the dissimilarity between observations *i* and *j*.
Note that *r -> 1* gives increasingly crisper clusterings whereas *r -> Inf* leads to complete fuzziness. K&R(1990), p.191 note that values too close to 1 can lead to slow convergence. Further note that even the default, *r = 2*, can lead to complete fuzziness, i.e., memberships *u(i,v) == 1/k*. In that case a warning is signalled and the user is advised to choose a smaller `memb.exp` (*=r*).
Compared to other fuzzy clustering methods, `fanny` has the following features: (a) it also accepts a dissimilarity matrix; (b) it is more robust to the `spherical cluster` assumption; (c) it provides a novel graphical display, the silhouette plot (see `<plot.partition>`).
### Value
an object of class `"fanny"` representing the clustering. See `<fanny.object>` for details.
### See Also
`<agnes>` for background and references; `<fanny.object>`, `<partition.object>`, `<plot.partition>`, `<daisy>`, `[dist](../../stats/html/dist)`.
### Examples
```
## generate 10+15 objects in two clusters, plus 3 objects lying
## between those clusters.
x <- rbind(cbind(rnorm(10, 0, 0.5), rnorm(10, 0, 0.5)),
cbind(rnorm(15, 5, 0.5), rnorm(15, 5, 0.5)),
cbind(rnorm( 3,3.2,0.5), rnorm( 3,3.2,0.5)))
fannyx <- fanny(x, 2)
## Note that observations 26:28 are "fuzzy" (closer to # 2):
fannyx
summary(fannyx)
plot(fannyx)
(fan.x.15 <- fanny(x, 2, memb.exp = 1.5)) # 'crispier' for obs. 26:28
(fanny(x, 2, memb.exp = 3)) # more fuzzy in general
data(ruspini)
f4 <- fanny(ruspini, 4)
stopifnot(rle(f4$clustering)$lengths == c(20,23,17,15))
plot(f4, which = 1)
## Plot similar to Figure 6 in Struyf et al (1996)
plot(fanny(ruspini, 5))
```
r None
`summary.diana` Summary Method for 'diana' Objects
---------------------------------------------------
### Description
Returns (and prints) a summary list for a `diana` object.
### Usage
```
## S3 method for class 'diana'
summary(object, ...)
## S3 method for class 'summary.diana'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `<diana>` object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<diana>`, `[diana.object](diana)`.
r None
`bannerplot` Plot Banner (of Hierarchical Clustering)
------------------------------------------------------
### Description
Draws a “banner”, i.e. basically a horizontal `[barplot](../../graphics/html/barplot)` visualizing the (agglomerative or divisive) hierarchical clustering or another binary dendrogram structure.
### Usage
```
bannerplot(x, w = rev(x$height), fromLeft = TRUE,
main=NULL, sub=NULL, xlab = "Height", adj = 0,
col = c(2, 0), border = 0, axes = TRUE, frame.plot = axes,
rev.xax = !fromLeft, xax.pretty = TRUE,
labels = NULL, nmax.lab = 35, max.strlen = 5,
yax.do = axes && length(x$order) <= nmax.lab,
yaxRight = fromLeft, y.mar = 2.4 + max.strlen/2.5, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a list with components `order`, `order.lab` and `height` when `w`, the next argument is not specified. |
| `w` | non-negative numeric vector of bar widths. |
| `fromLeft` | logical, indicating if the banner is from the left or not. |
| `main,sub` | main and sub titles, see `[title](../../graphics/html/title)`. |
| `xlab` | x axis label (with ‘correct’ default e.g. for `plot.agnes`). |
| `adj` | passed to `[title](../../graphics/html/title)(main,sub)` for string adjustment. |
| `col` | vector of length 2, for two horizontal segments. |
| `border` | color for bar border; now defaults to background (no border). |
| `axes` | logical indicating if axes (and labels) should be drawn at all. |
| `frame.plot` | logical indicating the banner should be framed; mainly used when `border = 0` (as per default). |
| `rev.xax` | logical indicating if the x axis should be reversed (as in `plot.diana`). |
| `xax.pretty` | logical or integer indicating if `[pretty](../../base/html/pretty)()` should be used for the x axis. `xax.pretty = FALSE` is mainly for back compatibility. |
| `labels` | labels to use on y-axis; the default is constructed from `x`. |
| `nmax.lab` | integer indicating the number of labels which is considered too large for single-name labelling the banner plot. |
| `max.strlen` | positive integer giving the length to which strings are truncated in banner plot labeling. |
| `yax.do` | logical indicating if a y axis and banner labels should be drawn. |
| `yaxRight` | logical indicating if the y axis is on the right or left. |
| `y.mar` | positive number specifying the margin width to use when banners are labeled (along a y-axis). The default adapts to the string width and optimally would also depend on the font. |
| `...` | graphical parameters (see `[par](../../graphics/html/par)`) may also be supplied as arguments to this function. |
### Note
This is mainly a utility called from `<plot.agnes>`, `<plot.diana>` and `<plot.mona>`.
### Author(s)
Martin Maechler (from original code of Kaufman and Rousseeuw).
### Examples
```
data(agriculture)
bannerplot(agnes(agriculture), main = "Bannerplot")
```
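A further hedged sketch, exercising a few of the arguments documented above (drawing the banner from the right, with custom colours):

```
bannerplot(agnes(agriculture), fromLeft = FALSE,
           col = c("lightblue", "white"), main = "Banner, drawn from the right")
```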
r None
`twins.object` Hierarchical Clustering Object
----------------------------------------------
### Description
The objects of class `"twins"` represent an agglomerative or divisive (polythetic) hierarchical clustering of a dataset.
### Value
See `<agnes.object>` and `[diana.object](diana)` for details.
### GENERATION
This class of objects is returned from `agnes` or `diana`.
### METHODS
The `"twins"` class has a method for the following generic function: `pltree`.
### INHERITANCE
The following classes inherit from class `"twins"` : `"agnes"` and `"diana"`.
### See Also
`<agnes>`,`<diana>`.
r None
`silhouette` Compute or Extract Silhouette Information from Clustering
-----------------------------------------------------------------------
### Description
Compute silhouette information according to a given clustering in *k* clusters.
### Usage
```
silhouette(x, ...)
## Default S3 method:
silhouette(x, dist, dmatrix, ...)
## S3 method for class 'partition'
silhouette(x, ...)
## S3 method for class 'clara'
silhouette(x, full = FALSE, subset = NULL, ...)
sortSilhouette(object, ...)
## S3 method for class 'silhouette'
summary(object, FUN = mean, ...)
## S3 method for class 'silhouette'
plot(x, nmax.lab = 40, max.strlen = 5,
main = NULL, sub = NULL, xlab = expression("Silhouette width "* s[i]),
col = "gray", do.col.sort = length(col) > 1, border = 0,
cex.names = par("cex.axis"), do.n.k = TRUE, do.clus.stat = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object of appropriate class; for the `default` method an integer vector with *k* different integer cluster codes or a list with such an `x$clustering` component. Note that silhouette statistics are only defined if *2 <= k <= n-1*. |
| `dist` | a dissimilarity object inheriting from class `[dist](../../stats/html/dist)` or coercible to one. If not specified, `dmatrix` must be. |
| `dmatrix` | a symmetric dissimilarity matrix (*n x n*), specified instead of `dist`, which can be more efficient. |
| `full` | logical or number in *[0,1]* specifying if a *full* silhouette should be computed for a `<clara>` object. When a number, say *f*, the silhouette values are computed for a random `[sample.int](../../base/html/sample)(n, size = f*n)` subsample of the data. This requires *O((f\*n)^2)* memory, since the full dissimilarity of the (sub)sample (see `<daisy>`) is needed internally. |
| `subset` | a subset from `1:n`, specified instead of `full` to specify the indices of the observations to be used for the silhouette computations. |
| `object` | an object of class `silhouette`. |
| `...` | further arguments passed to and from methods. |
| `FUN` | function used to summarize silhouette widths. |
| `nmax.lab` | integer indicating the number of labels which is considered too large for single-name labeling the silhouette plot. |
| `max.strlen` | positive integer giving the length to which strings are truncated in silhouette plot labeling. |
| `main, sub, xlab` | arguments to `[title](../../graphics/html/title)`; have a sensible non-NULL default here. |
| `col, border, cex.names` | arguments passed to `[barplot](../../graphics/html/barplot)()`; note that the default used to be `col = heat.colors(n), border = par("fg")` instead. `col` can also be a color vector of length *k* for clusterwise coloring, see also `do.col.sort`. |
| `do.col.sort` | logical indicating if the colors `col` should be sorted “along” the silhouette; this is useful for casewise or clusterwise coloring. |
| `do.n.k` | logical indicating if *n* and *k* “title text” should be written. |
| `do.clus.stat` | logical indicating if cluster size and averages should be written right to the silhouettes. |
### Details
For each observation i, the *silhouette width* *s(i)* is defined as follows:
Put a(i) = average dissimilarity between i and all other points of the cluster to which i belongs (if i is the *only* observation in its cluster, *s(i) := 0* without further calculations). For all *other* clusters C, put *d(i,C)* = average dissimilarity of i to all observations of C. The smallest of these *d(i,C)* is *b(i) := \min\_C d(i,C)*, and can be seen as the dissimilarity between i and its “neighbor” cluster, i.e., the nearest one to which it does *not* belong. Finally,
*s(i) := ( b(i) - a(i) ) / max( a(i), b(i) ).*
`silhouette.default()` is now based on C code donated by Romain Francois (the R version being still available as `cluster:::silhouette.default.R`).
Observations with a large *s(i)* (almost 1) are very well clustered, a small *s(i)* (around 0) means that the observation lies between two clusters, and observations with a negative *s(i)* are probably placed in the wrong cluster.
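A minimal sketch (not the package code) of the formula above, computing *a(i)*, *b(i)* and *s(i)* for one observation of a `pam()` partition and comparing with `silhouette()`; the object names are illustrative only:
```
data(ruspini)
pr4 <- pam(ruspini, 4)
d <- as.matrix(daisy(ruspini)) # same Euclidean dissimilarities as pam() used
cl <- pr4$clustering
i <- 1
a <- mean(d[i, cl == cl[i] & seq_along(cl) != i]) # a(i): average to own cluster
b <- min(vapply(setdiff(unique(cl), cl[i]),
                function(C) mean(d[i, cl == C]), numeric(1))) # b(i): nearest other cluster
(b - a) / max(a, b) # s(i) by the formula
silhouette(pr4)[i, "sil_width"] # should agree
```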
### Value
`silhouette()` returns an object, `sil`, of class `silhouette` which is an *n x 3* matrix with attributes. For each observation i, `sil[i,]` contains the cluster to which i belongs as well as the neighbor cluster of i (the cluster, not containing i, for which the average dissimilarity between its observations and i is minimal), and the silhouette width *s(i)* of the observation. The `[colnames](../../base/html/colnames)` correspondingly are `c("cluster", "neighbor", "sil_width")`.
`summary(sil)` returns an object of class `summary.silhouette`, a list with components
`si.summary`:
numerical `[summary](../../base/html/summary)` of the individual silhouette widths *s(i)*.
`clus.avg.widths`:
numeric (rank 1) array of clusterwise *means* of silhouette widths where `mean = FUN` is used.
`avg.width`:
the total mean `FUN(s)` where `s` are the individual silhouette widths.
`clus.sizes`:
`[table](../../base/html/table)` of the *k* cluster sizes.
`call`:
if available, the `[call](../../base/html/call)` creating `sil`.
`Ordered`:
logical identical to `attr(sil, "Ordered")`, see below.
`sortSilhouette(sil)` orders the rows of `sil` as in the silhouette plot, by cluster (increasingly) and decreasing silhouette width *s(i)*.
`attr(sil, "Ordered")` is a logical indicating if `sil` *is* ordered as by `sortSilhouette()`. In that case, `rownames(sil)` will contain case labels or numbers, and
`attr(sil, "iOrd")` the ordering index vector.
### Note
While `silhouette()` is *intrinsic* to the `[partition](partition.object)` clusterings, and hence has a (trivial) method for these, it is straightforward to get silhouettes for hierarchical clusterings via `silhouette.default()` with `[cutree](../../stats/html/cutree)()` and a distance as input.
By default, for `<clara>()` partitions, the silhouette is just for the best random *subset* used. Use `full = TRUE` to compute (and later possibly plot) the full silhouette.
### References
Rousseeuw, P.J. (1987) Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. *J. Comput. Appl. Math.*, **20**, 53–65.
chapter 2 of Kaufman and Rousseeuw (1990), see the references in `<plot.agnes>`.
### See Also
`<partition.object>`, `<plot.partition>`.
### Examples
```
data(ruspini)
pr4 <- pam(ruspini, 4)
str(si <- silhouette(pr4))
(ssi <- summary(si))
plot(si) # silhouette plot
plot(si, col = c("red", "green", "blue", "purple"))# with cluster-wise coloring
si2 <- silhouette(pr4$clustering, dist(ruspini, "canberra"))
summary(si2) # has small values: "canberra"'s fault
plot(si2, nmax= 80, cex.names=0.6)
op <- par(mfrow= c(3,2), oma= c(0,0, 3, 0),
mgp= c(1.6,.8,0), mar= .1+c(4,2,2,2))
for(k in 2:6)
plot(silhouette(pam(ruspini, k=k)), main = paste("k = ",k), do.n.k=FALSE)
mtext("PAM(Ruspini) as in Kaufman & Rousseeuw, p.101",
outer = TRUE, font = par("font.main"), cex = par("cex.main")); frame()
## the same with cluster-wise colours:
c6 <- c("tomato", "forest green", "dark blue", "purple2", "goldenrod4", "gray20")
for(k in 2:6)
plot(silhouette(pam(ruspini, k=k)), main = paste("k = ",k), do.n.k=FALSE,
col = c6[1:k])
par(op)
## clara(): standard silhouette is just for the best random subset
data(xclara)
set.seed(7)
str(xc1k <- xclara[ sample(nrow(xclara), size = 1000) ,]) # rownames == indices
cl3 <- clara(xc1k, 3)
plot(silhouette(cl3))# only of the "best" subset of 46
## The full silhouette: internally needs large (36 MB) dist object:
sf <- silhouette(cl3, full = TRUE) ## this is the same as
s.full <- silhouette(cl3$clustering, daisy(xc1k))
stopifnot(all.equal(sf, s.full, check.attributes = FALSE, tolerance = 0))
## color dependent on original "3 groups of each 1000": % __FIXME ??__
plot(sf, col = 2+ as.integer(names(cl3$clustering) ) %/% 1000,
main ="plot(silhouette(clara(.), full = TRUE))")
## Silhouette for a hierarchical clustering:
ar <- agnes(ruspini)
si3 <- silhouette(cutree(ar, k = 5), # k = 4 gave the same as pam() above
daisy(ruspini))
plot(si3, nmax = 80, cex.names = 0.5)
## 2 groups: Agnes() wasn't too good:
si4 <- silhouette(cutree(ar, k = 2), daisy(ruspini))
plot(si4, nmax = 80, cex.names = 0.5)
```
r None
`agriculture` European Union Agricultural Workforces
-----------------------------------------------------
### Description
Gross National Product (GNP) per capita and percentage of the population working in agriculture for each country belonging to the European Union in 1993.
### Usage
```
data(agriculture)
```
### Format
A data frame with 12 observations on 2 variables:
| | | | |
| --- | --- | --- | --- |
| [ , 1] | `x` | numeric | per capita GNP |
| [ , 2] | `y` | numeric | percentage in agriculture |
The row names of the data frame indicate the countries.
### Details
The data seem to show two clusters, the “more agricultural” one consisting of Greece, Portugal, Spain, and Ireland.
### Source
Eurostat (European Statistical Agency, 1994): *Cijfers en feiten: Een statistisch portret van de Europese Unie*.
### References
see those in `<agnes>`.
### See Also
`<agnes>`, `<daisy>`, `<diana>`.
### Examples
```
data(agriculture)
## Compute the dissimilarities using Euclidean metric and without
## standardization
daisy(agriculture, metric = "euclidean", stand = FALSE)
## 2nd plot is similar to Figure 3 in Struyf et al (1996)
plot(pam(agriculture, 2))
## Plot similar to Figure 7 in Struyf et al (1996)
## Not run: plot(agnes(agriculture), ask = TRUE)
## Plot similar to Figure 8 in Struyf et al (1996)
## Not run: plot(diana(agriculture), ask = TRUE)
```
r None
`flower` Flower Characteristics
--------------------------------
### Description
8 characteristics for 18 popular flowers.
### Usage
```
data(flower)
```
### Format
A data frame with 18 observations on 8 variables:
| | | |
| --- | --- | --- |
| [ , "V1"] | factor | winters |
| [ , "V2"] | factor | shadow |
| [ , "V3"] | factor | tubers |
| [ , "V4"] | factor | color |
| [ , "V5"] | ordered | soil |
| [ , "V6"] | ordered | preference |
| [ , "V7"] | numeric | height |
| [ , "V8"] | numeric | distance |
V1
winters, is binary and indicates whether the plant may be left in the garden when it freezes.
V2
shadow, is binary and shows whether the plant needs to stand in the shadow.
V3
tubers, is asymmetric binary and distinguishes between plants with tubers and plants that grow in any other way.
V4
color, is nominal and specifies the flower's color (1 = white, 2 = yellow, 3 = pink, 4 = red, 5 = blue).
V5
soil, is ordinal and indicates whether the plant grows in dry (1), normal (2), or wet (3) soil.
V6
preference, is ordinal and gives someone's preference ranking going from 1 to 18.
V7
height, is interval scaled, the plant's height in centimeters.
V8
distance, is interval scaled, the distance in centimeters that should be left between the plants.
### References
Struyf, Hubert and Rousseeuw (1996), see `<agnes>`.
### Examples
```
data(flower)
## Example 2 in ref
daisy(flower, type = list(asymm = 3))
daisy(flower, type = list(asymm = c(1, 3), ordratio = 7))
```
r None
`summary.agnes` Summary Method for 'agnes' Objects
---------------------------------------------------
### Description
Returns (and prints) a summary list for an `agnes` object. Printing gives more output than the corresponding `<print.agnes>` method.
### Usage
```
## S3 method for class 'agnes'
summary(object, ...)
## S3 method for class 'summary.agnes'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `<agnes>` object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<agnes>`, `<agnes.object>`.
### Examples
```
data(agriculture)
summary(agnes(agriculture))
```
r None
`dissimilarity.object` Dissimilarity Matrix Object
---------------------------------------------------
### Description
Objects of class `"dissimilarity"` representing the dissimilarity matrix of a dataset.
### Value
The dissimilarity matrix is symmetric, and hence its lower triangle (column wise) is represented as a vector to save storage space. If the object is called `do` and `n` is the number of observations, i.e., `n <- attr(do, "Size")`, then for *i < j <= n*, the dissimilarity between (row) i and j is `do[n*(i-1) - i*(i-1)/2 + j-i]`. The length of the vector is *n\*(n-1)/2*, i.e., of order *n^2*.
`"dissimilarity"` objects also inherit from class `[dist](../../stats/html/dist)` and can use `dist` methods, in particular, `[as.matrix](../../base/html/matrix)`, such that *d(i,j)* from above is just `as.matrix(do)[i,j]`.
The object has the following attributes:
| | |
| --- | --- |
| `Size` | the number of observations in the dataset. |
| `Metric` | the metric used for calculating the dissimilarities. Possible values are "euclidean", "manhattan", "mixed" (if variables of different types were present in the dataset), and "unspecified". |
| `Labels` | optionally, contains the labels, if any, of the observations of the dataset. |
| `NA.message` | optionally, if a dissimilarity could not be computed, because of too many missing values for some observations of the dataset. |
| `Types` | when a mixed metric was used, the types for each variable as one-letter codes (as in the book, e.g. p.54): A = Asymmetric binary, S = Symmetric binary, N = Nominal (factor), O = Ordinal (ordered factor), I = Interval scaled (numeric), T = raTio to be log transformed (positive numeric). |
### GENERATION
`<daisy>` returns this class of objects. Also the functions `pam`, `clara`, `fanny`, `agnes`, and `diana` return a `dissimilarity` object, as one component of their return objects.
### METHODS
The `"dissimilarity"` class has methods for the following generic functions: `print`, `summary`.
### See Also
`<daisy>`, `[dist](../../stats/html/dist)`, `<pam>`, `<clara>`, `<fanny>`, `<agnes>`, `<diana>`.
r None
`daisy` Dissimilarity Matrix Calculation
-----------------------------------------
### Description
Compute all the pairwise dissimilarities (distances) between observations in the data set. The original variables may be of mixed types. In that case, or whenever `metric = "gower"` is set, a generalization of Gower's formula is used, see ‘Details’ below.
### Usage
```
daisy(x, metric = c("euclidean", "manhattan", "gower"),
stand = FALSE, type = list(), weights = rep.int(1, p),
warnBin = warnType, warnAsym = warnType, warnConst = warnType,
warnType = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric matrix or data frame, of dimension *n x p*, say. Dissimilarities will be computed between the rows of `x`. Columns of mode `numeric` (i.e. all columns when `x` is a matrix) will be recognized as interval scaled variables, columns of class `factor` will be recognized as nominal variables, and columns of class `ordered` will be recognized as ordinal variables. Other variable types should be specified with the `type` argument. Missing values (`[NA](../../base/html/na)`s) are allowed. |
| `metric` | character string specifying the metric to be used. The currently available options are `"euclidean"` (the default), `"manhattan"` and `"gower"`. Euclidean distances are root sum-of-squares of differences, and manhattan distances are the sum of absolute differences. “Gower's distance” is chosen by metric `"gower"` or automatically if some columns of `x` are not numeric. Also known as Gower's coefficient (1971), expressed as a dissimilarity, this implies that a particular standardisation will be applied to each variable, and the “distance” between two units is the sum of all the variable-specific distances, see the details section. |
| `stand` | logical flag: if TRUE, then the measurements in `x` are standardized before calculating the dissimilarities. Measurements are standardized for each variable (column), by subtracting the variable's mean value and dividing by the variable's mean absolute deviation. If not all columns of `x` are numeric, `stand` will be ignored and Gower's standardization (based on the `[range](../../base/html/range)`) will be applied in any case, see argument `metric`, above, and the details section. |
| `type` | list for specifying some (or all) of the types of the variables (columns) in `x`. The list may contain the following components: `"ordratio"` (ratio scaled variables to be treated as ordinal variables), `"logratio"` (ratio scaled variables that must be logarithmically transformed), `"asymm"` (asymmetric binary) and `"symm"` (symmetric binary variables). Each component's value is a vector, containing the names or the numbers of the corresponding columns of `x`. Variables not mentioned in the `type` list are interpreted as usual (see argument `x`). |
| `weights` | an optional numeric vector of length *p*(=`ncol(x)`); to be used in “case 2” (mixed variables, or `metric = "gower"`), specifying a weight for each variable (`x[,k]`) instead of *1* in Gower's original formula. |
| `warnBin, warnAsym, warnConst` | logicals indicating if the corresponding type checking warnings should be signalled (when found). |
| `warnType` | logical indicating if *all* the type checking warnings should be active or not. |
### Details
The original version of `daisy` is fully described in chapter 1 of Kaufman and Rousseeuw (1990). Compared to `[dist](../../stats/html/dist)` whose input must be numeric variables, the main feature of `daisy` is its ability to handle other variable types as well (e.g. nominal, ordinal, (a)symmetric binary) even when different types occur in the same data set.
The handling of nominal, ordinal, and (a)symmetric binary data is achieved by using the general dissimilarity coefficient of Gower (1971). If `x` contains any columns of these data-types, both arguments `metric` and `stand` will be ignored and Gower's coefficient will be used as the metric. This can also be activated for purely numeric data by `metric = "gower"`. With that, each variable (column) is first standardized by dividing each entry by the range of the corresponding variable, after subtracting the minimum value; consequently the rescaled variable has range *[0,1]*, exactly.
Note that setting the type to `symm` (symmetric binary) gives the same dissimilarities as using *nominal* (which is chosen for non-ordered factors) only when no missing values are present, and more efficiently.
Note that `daisy` signals a warning when 2-valued numerical variables do not have an explicit `type` specified, because the reference authors recommend to consider using `"asymm"`; the warning may be silenced by `warnBin = FALSE`.
In the `daisy` algorithm, missing values in a row of x are not included in the dissimilarities involving that row. There are two main cases,
1. If all variables are interval scaled (and `metric` is *not* `"gower"`), the metric is "euclidean", and *n\_g* is the number of columns in which neither row i and j have NAs, then the dissimilarity d(i,j) returned is *sqrt(p/n\_g)* (*p=*ncol(x)) times the Euclidean distance between the two vectors of length *n\_g* shortened to exclude NAs. The rule is similar for the "manhattan" metric, except that the coefficient is *p/n\_g*. If *n\_g = 0*, the dissimilarity is NA.
2. When some variables have a type other than interval scaled, or if `metric = "gower"` is specified, the dissimilarity between two rows is the weighted mean of the contributions of each variable. Specifically,
*d\_ij = d(i,j) = sum(k=1:p; w\_k delta(ij;k) d(ij,k)) / sum(k=1:p; w\_k delta(ij;k)).*
In other words, *d\_ij* is a weighted mean of *d(ij,k)* with weights *w\_k delta(ij;k)*, where *w\_k*`= weights[k]`, *delta(ij;k)* is 0 or 1, and *d(ij,k)*, the k-th variable contribution to the total distance, is a distance between `x[i,k]` and `x[j,k]`, see below.
The 0-1 weight *delta(ij;k)* becomes zero when the variable `x[,k]` is missing in either or both rows (i and j), or when the variable is asymmetric binary and both values are zero. In all other situations it is 1.
The contribution *d(ij,k)* of a nominal or binary variable to the total dissimilarity is 0 if both values are equal, 1 otherwise. The contribution of other variables is the absolute difference of both values, divided by the total range of that variable. Note that “standard scoring” is applied to ordinal variables, i.e., they are replaced by their integer codes `1:K`. Note that this is not the same as using their ranks (since there typically are ties).
As the individual contributions *d(ij,k)* are in *[0,1]*, the dissimilarity *d\_ij* will remain in this range. If all weights *w\_k delta(ij;k)* are zero, the dissimilarity is set to `[NA](../../base/html/na)`.
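For purely numeric data with unit weights and no missing values, the formula above reduces to a plain mean of range-scaled absolute differences. A minimal sketch (not the package code) checking this against `daisy(*, metric = "gower")`; the data set and row indices are illustrative only:
```
data(agriculture)
x <- as.matrix(agriculture)
rng <- apply(x, 2, function(v) diff(range(v))) # per-variable ranges
i <- 1; j <- 2
d.ijk <- abs(x[i, ] - x[j, ]) / rng # per-variable contributions d(ij,k)
mean(d.ijk) # d_ij with w_k = 1 and delta(ij;k) = 1
as.matrix(daisy(agriculture, metric = "gower"))[i, j] # should agree
```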
### Value
an object of class `"dissimilarity"` containing the dissimilarities among the rows of `x`. This is typically the input for the functions `pam`, `fanny`, `agnes` or `diana`. For more details, see `<dissimilarity.object>`.
### Background
Dissimilarities are used as inputs to cluster analysis and multidimensional scaling. The choice of metric may have a large impact.
### Author(s)
Anja Struyf, Mia Hubert, and Peter J. Rousseeuw, for the original version.
Martin Maechler improved the `[NA](../../base/html/na)` handling and `type` specification checking, and extended functionality to `metric = "gower"` and the optional `weights` argument.
### References
Gower, J. C. (1971) A general coefficient of similarity and some of its properties, *Biometrics* **27**, 857–874.
Kaufman, L. and Rousseeuw, P.J. (1990) *Finding Groups in Data: An Introduction to Cluster Analysis*. Wiley, New York.
Struyf, A., Hubert, M. and Rousseeuw, P.J. (1997) Integrating Robust Clustering Techniques in S-PLUS, *Computational Statistics and Data Analysis* **26**, 17–37.
### See Also
`<dissimilarity.object>`, `[dist](../../stats/html/dist)`, `<pam>`, `<fanny>`, `<clara>`, `<agnes>`, `<diana>`.
### Examples
```
data(agriculture)
## Example 1 in ref:
## Dissimilarities using Euclidean metric and without standardization
d.agr <- daisy(agriculture, metric = "euclidean", stand = FALSE)
d.agr
as.matrix(d.agr)[,"DK"] # via as.matrix.dist(.)
## compare with
as.matrix(daisy(agriculture, metric = "gower"))
data(flower)
## Example 2 in ref
summary(dfl1 <- daisy(flower, type = list(asymm = 3)))
summary(dfl2 <- daisy(flower, type = list(asymm = c(1, 3), ordratio = 7)))
## this failed earlier:
summary(dfl3 <- daisy(flower,
type = list(asymm = c("V1", "V3"), symm= 2,
ordratio= 7, logratio= 8)))
```
r None
`coef.hclust` Agglomerative / Divisive Coefficient for 'hclust' Objects
------------------------------------------------------------------------
### Description
Computes the “agglomerative coefficient” (aka “divisive coefficient” for `<diana>`), measuring the clustering structure of the dataset.
For each observation i, denote by *m(i)* its dissimilarity to the first cluster it is merged with, divided by the dissimilarity of the merger in the final step of the algorithm. The agglomerative coefficient is the average of all *1 - m(i)*. It can also be seen as the average width (or the percentage filled) of the banner plot.
`coefHier()` directly interfaces to the underlying C code, and “proves” that *only* `object$height` is needed to compute the coefficient.
Because it grows with the number of observations, this measure should not be used to compare datasets of very different sizes.
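A minimal sketch (not the package implementation) of this definition, applied to an `"hclust"` object (whose `$height` is in merge order); the function name is illustrative only:
```
acSketch <- function(hc) {
  ## height at which each observation is first merged into a cluster
  first.h <- vapply(seq_len(nrow(hc$merge) + 1L), function(i)
    hc$height[which(hc$merge == -i, arr.ind = TRUE)[1, "row"]], numeric(1))
  mean(1 - first.h / max(hc$height)) # average of all 1 - m(i)
}
data(agriculture)
aa <- agnes(agriculture)
c(sketch = acSketch(as.hclust(aa)), ac = aa$ac) # should (nearly) agree
```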
### Usage
```
coefHier(object)
coef.hclust(object, ...)
## S3 method for class 'hclust'
coef(object, ...)
## S3 method for class 'twins'
coef(object, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | an object of class `"hclust"` or `"twins"`, i.e., typically the result of `[hclust](../../stats/html/hclust)(.)`,`<agnes>(.)`, or `<diana>(.)`. Since `coef.hclust` only uses `object$height` and `object$merge`, `object` can be any list-like object with appropriate `merge` and `height` components. For `coefHier`, only `object$height` is needed. |
| `...` | currently unused potential further arguments |
### Value
a number specifying the *agglomerative* (or *divisive* for `diana` objects) coefficient as defined by Kaufman and Rousseeuw, see `<agnes.object> $ ac` or `[diana.object](diana) $ dc`.
### Examples
```
data(agriculture)
aa <- agnes(agriculture)
coef(aa) # really just extracts aa$ac
coef(as.hclust(aa))# recomputes
coefHier(aa) # ditto
```
r None
`xclara` Bivariate Data Set with 3 Clusters
--------------------------------------------
### Description
An artificial data set consisting of 3000 points in 3 quite well-separated clusters.
### Usage
```
data(xclara)
```
### Format
A data frame with 3000 observations on 2 numeric variables (named `V1` and `V2`) giving the *x* and *y* coordinates of the points, respectively.
### Note
Our version of `xclara` is slightly more rounded than the one from `[read.table](../../utils/html/read.table)("xclara.dat")`; the relative difference measured by `[all.equal](../../base/html/all.equal)` is `1.15e-7` for `V1` and `1.17e-7` for `V2`, which suggests that our version is the result of `[options](../../base/html/options)(digits = 7)` formatting.
Previously (before May 2017), it was claimed that the three clusters were each of size 1000, which is clearly wrong. `<pam>(*, 3)` gives cluster sizes of 899, 1149, and 952, which apart from seven “outliers” (or “mislabellings”) correspond to observation indices *1:900*, *901:2050*, and *2051:3000*, see the example.
### Source
Sample data set accompanying the reference below (file ‘xclara.dat’ inside ‘clus\_examples.tar.gz’).
### References
Anja Struyf, Mia Hubert & Peter J. Rousseeuw (1996) Clustering in an Object-Oriented Environment. *Journal of Statistical Software* **1**. doi: [10.18637/jss.v001.i04](https://doi.org/10.18637/jss.v001.i04)
### Examples
```
## Visualization: Assuming groups are defined as {1:1000}, {1001:2000}, {2001:3000}
plot(xclara, cex = 3/4, col = rep(1:3, each=1000))
p.ID <- c(78, 1411, 2535) ## PAM's medoid indices == pam(xclara, 3)$id.med
text(xclara[p.ID,], labels = 1:3, cex=2, col=1:3)
px <- pam(xclara, 3) ## takes ~2 seconds
cxcl <- px$clustering ; iCl <- split(seq_along(cxcl), cxcl)
boxplot(iCl, range = 0.7, horizontal=TRUE,
main = "Indices of the 3 clusters of pam(xclara, 3)")
## Look more closely now:
bxCl <- boxplot(iCl, range = 0.7, plot=FALSE)
## We see 3 + 2 + 2 = 7 clear "outlier"s or "wrong group" observations:
with(bxCl, rbind(out, group))
## out 1038 1451 1610 30 327 562 770
## group 1 1 1 2 2 3 3
## Apart from these, what are the robust ranges of indices? -- Robust range:
t(iR <- bxCl$stats[c(1,5),])
## 1 900
## 901 2050
## 2051 3000
gc <- adjustcolor("gray20",1/2)
abline(v = iR, col = gc, lty=3)
axis(3, at = c(0, iR[2,]), padj = 1.2, col=gc, col.axis=gc)
```
r None
`print.fanny` Print and Summary Methods for FANNY Objects
----------------------------------------------------------
### Description
Prints the objective function, membership coefficients and clustering vector of a `fanny` object.
This is a method for the function `[print](../../base/html/print)()` for objects inheriting from class `<fanny>`.
### Usage
```
## S3 method for class 'fanny'
print(x, digits = getOption("digits"), ...)
## S3 method for class 'fanny'
summary(object, ...)
## S3 method for class 'summary.fanny'
print(x, digits = getOption("digits"), ...)
```
### Arguments
| | |
| --- | --- |
| `x, object` | a `<fanny>` object. |
| `digits` | number of significant digits for printing, see `[print.default](../../base/html/print.default)`. |
| `...` | potential further arguments (required by generic). |
### See Also
`<fanny>`, `<fanny.object>`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`.
r None
`print.agnes` Print Method for AGNES Objects
---------------------------------------------
### Description
Prints the call, agglomerative coefficient, ordering of objects and distances between merging clusters ('Height') of an `agnes` object.
This is a method for the generic `[print](../../base/html/print)()` function for objects inheriting from class `agnes`, see `<agnes.object>`.
### Usage
```
## S3 method for class 'agnes'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an agnes object. |
| `...` | potential further arguments (required by generic). |
### See Also
`<summary.agnes>` producing more output; `<agnes>`, `<agnes.object>`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`.
r None
`sizeDiss` Sample Size of Dissimilarity Like Object
----------------------------------------------------
### Description
Returns the number of observations (*sample size*) corresponding to a dissimilarity-like object, or equivalently, the number of rows or columns of a matrix when only the lower or upper triangular part (without diagonal) is given.
It is simply the inverse of the function *f(n) = n(n-1)/2*.
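A minimal sketch of the closed-form inverse, *n = (1 + sqrt(1 + 8\*L)) / 2* with *L =* `length(d)`; the function name is illustrative only:
```
sizeDissSketch <- function(d) {
  n <- (1 + sqrt(1 + 8 * length(d))) / 2
  if (n == round(n)) as.integer(n) else NA
}
sizeDissSketch(1:10) # 5
sizeDissSketch(1:9)  # NA
```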
### Usage
```
sizeDiss(d)
```
### Arguments
| | |
| --- | --- |
| `d` | any **R** object with length (typically) *n(n-1)/2*. |
### Value
a number; *n* if `length(d) == n(n-1)/2`, `NA` otherwise.
### See Also
`<dissimilarity.object>` and also `[as.dist](../../stats/html/dist)` for class `dissimilarity` and `dist` objects which have a `Size` attribute.
### Examples
```
sizeDiss(1:10)# 5, since 10 == 5 * (5 - 1) / 2
sizeDiss(1:9) # NA
n <- 1:100
stopifnot(n == sapply( n*(n-1)/2, function(n) sizeDiss(logical(n))))
```
r None
`print.diana` Print Method for DIANA Objects
---------------------------------------------
### Description
Prints the ordering of objects, diameters of split clusters, and divisive coefficient of a `diana` object.
This is a method for the function `[print](../../base/html/print)()` for objects inheriting from class `<diana>`.
### Usage
```
## S3 method for class 'diana'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a diana object. |
| `...` | potential further arguments (required by the generic). |
### See Also
`<diana>`, `[diana.object](diana)`, `[print](../../base/html/print)`, `[print.default](../../base/html/print.default)`.
r None
`ruspini` Ruspini Data
-----------------------
### Description
The Ruspini data set, consisting of 75 points in four groups, is popular for illustrating clustering techniques.
### Usage
```
data(ruspini)
```
### Format
A data frame with 75 observations on 2 variables giving the x and y coordinates of the points, respectively.
### Source
E. H. Ruspini (1970) Numerical methods for fuzzy clustering. *Inform. Sci.* **2**, 319–350.
### References
see those in `<agnes>`.
### Examples
```
data(ruspini)
## Plot similar to Figure 4 in Struyf et al (1996)
## Not run: plot(pam(ruspini, 4), ask = TRUE)
## Plot similar to Figure 6 in Struyf et al (1996)
plot(fanny(ruspini, 5))
```
r None
`chorSub` Subset of C-horizon of Kola Data
-------------------------------------------
### Description
This is a small rounded subset of the C-horizon data `[chorizon](../../mvoutlier/html/chorizon)` from package mvoutlier.
### Usage
```
data(chorSub)
```
### Format
A data frame with 61 observations on 10 variables. The variables contain scaled concentrations of chemical elements.
### Details
This data set was produced from `chorizon` via these statements:
```
data(chorizon, package = "mvoutlier")
chorSub <- round(100*scale(chorizon[,101:110]))[190:250,]
storage.mode(chorSub) <- "integer"
colnames(chorSub) <- gsub("_.*", '', colnames(chorSub))
```
### Source
Kola Project (1993-1998)
### See Also
`[chorizon](../../mvoutlier/html/chorizon)` in package mvoutlier and other Kola data in the same package.
### Examples
```
data(chorSub)
summary(chorSub)
pairs(chorSub, gap= .1)# some outliers
```
r None
`children` Low-level Functions for Management of Forked Processes
------------------------------------------------------------------
### Description
These are low-level support functions for the forking approach.
They are not available on Windows, and not exported from the namespace.
### Usage
```
children(select)
readChild(child)
readChildren(timeout = 0)
selectChildren(children = NULL, timeout = 0)
sendChildStdin(child, what)
sendMaster(what, raw.asis = TRUE)
mckill(process, signal = 2L)
```
### Arguments
| | |
| --- | --- |
| `select` | if omitted, all active children are returned, otherwise `select` should be a list of processes and only those from the list that are active will be returned. |
| `child` | child process (object of the class `"childProcess"`) or a process ID (pid). See also ‘Details’. |
| `timeout` | timeout (in seconds, fractions supported) to wait for a response before giving up. |
| `children` | list of child processes or a single child process object or a vector of process IDs or `NULL`. If `NULL` behaves as if all currently known children were supplied. |
| `what` | For `sendChildStdin`: Character or raw vector. In the former case elements are collapsed using the newline character. (But no trailing newline is added at the end!) For `sendMaster`: Data to send to the master process. If `what` is not a raw vector, it will be serialized into a raw vector. Do NOT send an empty raw vector – that is reserved for internal use. |
| `raw.asis` | logical, if `TRUE` and `what` is a raw vector then it is sent directly as-is to the master (default, suitable for arbitrary payload passing), otherwise raw vectors are serialized before sending just as any other objects (suitable for passing evaluation results). |
| `process` | process (object of the class `process`) or a process ID (pid) |
| `signal` | integer: signal to send. Values of 2 (SIGINT), 9 (SIGKILL) and 15 (SIGTERM) are pretty much portable, but for maximal portability use `tools::[SIGTERM](../../tools/html/pskill)` and so on. |
### Details
`children` returns currently active children.
`readChild` reads data (sent by `sendMaster`) from a given child process.
`selectChildren` checks children for available data.
`readChildren` checks all children for available data and reads from the first child that has available data.
`sendChildStdin` sends a string (or data) to the standard input of one or more children. Note that if the master session was interactive, it will also be echoed on the standard output of the master process (unless disabled). The function is vector-compatible, so you can specify `child` as a list or a vector of process IDs.
`sendMaster` sends data from the child to the master process.
`mckill` sends a signal to a child process: it is equivalent to `[pskill](../../tools/html/pskill)` in package tools.
### Value
`children` returns a (possibly empty) list of objects of class `"process"`, the process ID.
`readChild` and `readChildren` return a raw vector with a `"pid"` attribute if data were available, an integer vector of length one with the process ID if a child terminated or `NULL` if the child no longer exists (no children at all for `readChildren`).
`selectChildren` returns `TRUE` if the timeout was reached, `FALSE` if an error occurred (e.g., if the master process was interrupted) or an integer vector of process IDs with children that have data available, or `NULL` if there are no children.
`sendChildStdin` returns a vector of `TRUE` values (one for each member of `child`) or throws an error.
`sendMaster` returns `TRUE` or throws an error.
`mckill` returns `TRUE`.
### Warning
This is a very low-level interface for expert use only: it is not regarded as part of the **R** API and is subject to change without notice.
`sendMaster`, `readChild` and `sendChildStdin` did not support long vectors prior to **R** 3.4.0 and so were limited to *2^31 - 1* bytes (and still are on 32-bit platforms).
### Author(s)
Simon Urbanek and R Core.
Derived from the multicore package formerly on CRAN.
### See Also
`<mcfork>`, `[sendMaster](children)`, `<mcparallel>`
### Examples
```
## Not run:
p <- mcparallel(scan(n = 1, quiet = TRUE))
sendChildStdin(p, "17.4\n")
mccollect(p)[[1]]
## End(Not run)
```
r None
`mcaffinity` Get or Set CPU Affinity Mask of the Current Process
-----------------------------------------------------------------
### Description
`mcaffinity` retrieves or sets the CPU affinity mask of the current process, i.e., the set of CPUs the process is allowed to be run on. (CPU here means logical CPU which can be CPU, core or hyperthread unit.)
### Usage
```
mcaffinity(affinity = NULL)
```
### Arguments
| | |
| --- | --- |
| `affinity` | specification of the CPUs to lock this process to (numeric vector) or `NULL` if no change is requested |
### Details
`mcaffinity` can be used to obtain (`affinity = NULL`) or set the CPU affinity mask of the current process. The affinity mask is a list of integer CPU identifiers (starting from 1) that this process is allowed to run on. Not all systems provide user access to the process CPU affinity, in cases where no support is present at all `mcaffinity()` will return `NULL`. Some systems may take into account only the number of CPUs present in the mask.
Typically, it is legal to specify a larger set than the number of logical CPUs (but at most as many as the OS can handle), and the system will return the actually present set.
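A minimal sketch of querying, changing, and restoring the mask (on systems without user access to CPU affinity the query returns `NULL` and nothing is changed):
```
library(parallel)
old <- mcaffinity() # NULL if affinity is not supported here
if (!is.null(old)) {
  mcaffinity(1:2) # request logical CPUs 1 and 2; the OS may adjust this
  print(mcaffinity()) # the mask actually in effect
  mcaffinity(old) # restore the original mask
}
```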
### Value
`NULL` if CPU affinity is not supported by the system or an integer vector with the set of CPUs in the active affinity mask for this process (this may be different than `affinity`).
### Author(s)
Simon Urbanek.
### See Also
`<mcparallel>`
r None
`mcparallel` Evaluate an R Expression Asynchronously in a Separate Process
---------------------------------------------------------------------------
### Description
These functions are based on forking and so are not available on Windows.
`mcparallel` starts a parallel **R** process which evaluates the given expression.
`mccollect` collects results from one or more parallel processes.
### Usage
```
mcparallel(expr, name, mc.set.seed = TRUE, silent = FALSE,
mc.affinity = NULL, mc.interactive = FALSE,
detached = FALSE)
mccollect(jobs, wait = TRUE, timeout = 0, intermediate = FALSE)
```
### Arguments
| | |
| --- | --- |
| `expr` | expression to evaluate (do *not* use any on-screen devices or GUI elements in this code, see `<mcfork>` for the inadvisability of using `mcparallel` with GUI front-ends and multi-threaded libraries). Raw vectors are reserved for internal use and cannot be returned, but the expression may evaluate e.g. to a list holding a raw vector. `NULL` should not be returned because it is used by `mccollect` to signal an error. |
| `name` | an optional name (character vector of length one) that can be associated with the job. |
| `mc.set.seed` | logical: see section ‘Random numbers’. |
| `silent` | if set to `TRUE` then all output on stdout will be suppressed (stderr is not affected). |
| `mc.affinity` | either a numeric vector specifying CPUs to restrict the child process to (1-based) or `NULL` to not modify the CPU affinity |
| `mc.interactive` | logical, if `TRUE` or `FALSE` then the child process will be set as interactive or non-interactive respectively. If `NA` then the child process will inherit the interactive flag from the parent. |
| `detached` | logical, if `TRUE` then the job is detached from the current session and cannot deliver any results back - it is used for the code side-effect only. |
| `jobs` | list of jobs (or a single job) to collect results for. Alternatively `jobs` can also be an integer vector of process IDs. If omitted `collect` will wait for all currently existing children. |
| `wait` | if set to `FALSE` it checks for any results that are available within `timeout` seconds from now, otherwise it waits for all specified jobs to finish. |
| `timeout` | timeout (in seconds) to check for job results – applies only if `wait` is `FALSE`. |
| `intermediate` | `FALSE` or a function which will be called while `collect` waits for results. The function will be called with one parameter which is the list of results received so far. |
### Details
`mcparallel` evaluates the `expr` expression in parallel to the current **R** process. Everything is shared read-only (or in fact copy-on-write) between the parallel process and the current process, i.e. no side-effects of the expression affect the main process. The result of the parallel execution can be collected using `mccollect` function.
`mccollect` function collects any available results from parallel jobs (or in fact any child process). If `wait` is `TRUE` then `collect` waits for all specified jobs to finish before returning a list containing the last reported result for each job. If `wait` is `FALSE` then `mccollect` merely checks for any results available at the moment and will not wait for jobs to finish. If `jobs` is specified, jobs not listed there will not be affected or acted upon.
Note: If `expr` uses low-level multicore functions such as `[sendMaster](children)` a single job can deliver results multiple times and it is the responsibility of the user to interpret them correctly. `mccollect` will return `NULL` for a terminating job that has sent its results already after which the job is no longer available.
Jobs are identified by process IDs (even when referred to as job objects), which are reused by the operating system. Detached jobs created by `mcparallel` can thus never be safely referred to by their process IDs nor job objects. Non-detached jobs are guaranteed to exist until collected by `mccollect`, even if crashed or terminated by a signal. Once collected by `mccollect`, a job is regarded as detached, and thus must no longer be referred to by its process ID nor its job object. With `wait = TRUE`, all jobs passed to `mccollect` are collected. With `wait = FALSE`, the collected jobs are given as names of the result vector, and thus in subsequent calls to `mccollect` these jobs must be excluded. Job objects should be used in preference of process IDs whenever accepted by the API.
The `mc.affinity` parameter can be used to try to restrict the child process to specific CPUs. The availability and the extent of this feature is system-dependent (e.g., some systems will only consider the CPU count, others will ignore it completely).
### Value
`mcparallel` returns an object of the class `"parallelJob"` which inherits from `"childProcess"` (see the ‘Value’ section of the help for `<mcfork>`). If argument `name` was supplied this will have an additional component `name`.
`mccollect` returns any results that are available in a list. The results will have the same order as the specified jobs. If there are multiple jobs and a job has a name it will be used to name the result, otherwise its process ID will be used. If none of the specified children are still running, it returns `NULL`.
### Random numbers
If `mc.set.seed = FALSE`, the child process has the same initial random number generator (RNG) state as the current **R** session. If the RNG has been used (or `.Random.seed` was restored from a saved workspace), the child will start drawing random numbers at the same point as the current session. If the RNG has not yet been used, the child will set a seed based on the time and process ID when it first uses the RNG: this is pretty much guaranteed to give a different random-number stream from the current session and any other child process.
The behaviour with `mc.set.seed = TRUE` is different only if `[RNGkind](../../base/html/random)("L'Ecuyer-CMRG")` has been selected. Then each time a child is forked it is given the next stream (see `[nextRNGStream](rngstream)`). So if you select that generator, set a seed and call `[mc.reset.stream](rngstream)` just before the first use of `mcparallel` the results of simulations will be reproducible provided the same tasks are given to the first, second, ... forked process.
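A minimal sketch of the reproducible recipe described above (forking, so not on Windows); the seed value is an arbitrary illustration:
```
library(parallel)
RNGkind("L'Ecuyer-CMRG")
set.seed(2022)
mc.reset.stream()
p <- mcparallel(rnorm(2))
q <- mcparallel(rnorm(2))
mccollect(list(p, q)) # re-running from set.seed() reproduces these draws
```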
### Note
Prior to **R** 3.4.0 and on a 32-bit platform, the [serialize](../../base/html/serialize)d result from each forked process is limited to *2^31 - 1* bytes. (Returning very large results via serialization is inefficient and should be avoided.)
### Author(s)
Simon Urbanek and R Core.
Derived from the multicore package formerly on CRAN (but with different handling of the RNG stream).
### See Also
`<pvec>`, `<mclapply>`
### Examples
```
p <- mcparallel(1:10)
q <- mcparallel(1:20)
# wait for both jobs to finish and collect all results
res <- mccollect(list(p, q))
## IGNORE_RDIFF_BEGIN
## reports process ids, so not reproducible
p <- mcparallel(1:10)
mccollect(p, wait = FALSE, 10) # will retrieve the result (since it's fast)
mccollect(p, wait = FALSE) # will signal the job as terminating
mccollect(p, wait = FALSE) # there is no longer such a job
## IGNORE_RDIFF_END
# a naive parallel lapply can be created using mcparallel alone:
jobs <- lapply(1:10, function(x) mcparallel(rnorm(x), name = x))
mccollect(jobs)
```
| programming_docs |
r None
`makeCluster` Create a Parallel Socket Cluster
-----------------------------------------------
### Description
Creates a set of copies of **R** running in parallel and communicating over sockets.
### Usage
```
makeCluster(spec, type, ...)
makePSOCKcluster(names, ...)
makeForkCluster(nnodes = getOption("mc.cores", 2L), ...)
stopCluster(cl = NULL)
setDefaultCluster(cl = NULL)
getDefaultCluster()
```
### Arguments
| | |
| --- | --- |
| `spec` | A specification appropriate to the type of cluster. |
| `names` | Either a character vector of host names on which to run the worker copies of **R**, or a positive integer (in which case that number of copies is run on localhost). |
| `nnodes` | The number of nodes to be forked. |
| `type` | One of the supported types: see ‘Details’. |
| `...` | Options to be passed to the function spawning the workers. See ‘Details’. |
| `cl` | an object of class `"cluster"`. |
### Details
`makeCluster` creates a cluster of one of the supported types. The default type, `"PSOCK"`, calls `makePSOCKcluster`. Type `"FORK"` calls `makeForkCluster`. Other types are passed to package [snow](https://CRAN.R-project.org/package=snow).
`makePSOCKcluster` is an enhanced version of `makeSOCKcluster` in package [snow](https://CRAN.R-project.org/package=snow). It runs `Rscript` on the specified host(s) to set up a worker process which listens on a socket for expressions to evaluate, and returns the results (as serialized objects).
`makeForkCluster` is merely a stub on Windows. On Unix-alike platforms it creates the worker process by forking.
The workers are most often running on the same host as the master, in which case no options need be set.
Several options are supported (mainly for `makePSOCKcluster`):
`master`
The host name of the master, as known to the workers. This may not be the same as it is known to the master, and on private subnets it may be necessary to specify this as a numeric IP address. For example, macOS is likely to detect a machine as somename.local, a name known only to itself.
`port`
The port number for the socket connection, default taken from the environment variable R\_PARALLEL\_PORT, then a randomly chosen port in the range `11000:11999`.
`timeout`
The timeout in seconds for that port. This is the maximum time of zero communication between master and worker before failing. Default is 30 days (and the POSIX standard only requires values up to 31 days to be supported).
`setup_timeout`
The maximum number of seconds a worker attempts to connect to master before failing. Default is 2 minutes. The waiting time before the next attempt starts at 0.1 seconds and is incremented 50% after each retry.
`outfile`
Where to direct the `[stdout](../../base/html/showconnections)` and `[stderr](../../base/html/showconnections)` connection output from the workers. `""` indicates no redirection (which may only be useful for workers on the local machine). Defaults to ‘/dev/null’ (‘nul:’ on Windows). The other possibility is a file path on the worker's host. Files will be opened in append mode, as all workers log to the same file.
`homogeneous`
Logical, default true. See ‘Note’.
`rscript`
See ‘Note’.
`rscript_args`
Character vector of additional arguments for `Rscript` such as --no-environ.
`renice`
A numerical ‘niceness’ to set for the worker processes, e.g. `15` for a low priority. OS-dependent: see `[psnice](../../tools/html/psnice)` for details.
`rshcmd`
The command to be run on the master to launch a process on another host. Defaults to `ssh`.
`user`
The user name to be used when communicating with another host.
`manual`
Logical. If true the workers will need to be run manually.
`methods`
Logical. If true (default) the workers will load the methods package: not loading it saves ca 30% of the startup CPU time of the cluster.
`useXDR`
Logical. If true (default) serialization will use XDR: where large amounts of data are to be transferred and all the nodes are little-endian, communication may be substantially faster if this is set to false.
`setup_strategy`
Character. If `"parallel"` (default) workers will be started in parallel during cluster setup when this is possible, which is now for homogeneous `"PSOCK"` clusters with all workers started automatically (`manual = FALSE`) on the local machine. Workers will be started sequentially on other clusters, on all clusters with `setup_strategy = "sequential"` and on **R** 3.6.0 and older. This option is for expert use only (e.g. debugging) and may be removed in future versions of R.
Function `makeForkCluster` creates a socket cluster by forking (and hence is not available on Windows). It supports options `port`, `timeout` and `outfile`, and always uses `useXDR = FALSE`. It is *strongly discouraged* to use the `"FORK"` cluster with GUI front-ends or multi-threaded libraries. See `<mcfork>` for details.
It is good practice to shut down the workers by calling `[stopCluster](makecluster)`: however the workers will terminate themselves once the socket on which they are listening for commands becomes unavailable, which it should if the master **R** session is completed (or its process dies).
Function `setDefaultCluster` registers a cluster as the default one for the current session. Using `setDefaultCluster(NULL)` removes the registered cluster, as does stopping that cluster.
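A minimal sketch of creating a small PSOCK cluster, registering it as the default, and shutting it down; the worker count and the `outfile` choice are illustrative only:
```
library(parallel)
cl <- makePSOCKcluster(2, outfile = "") # "" : do not redirect worker output
setDefaultCluster(cl) # register as the session default
clusterEvalQ(NULL, Sys.getpid()) # cl = NULL uses the registered default
stopCluster(cl) # good practice; this also removes the registration
```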
### Value
For the cluster creators, an object of class `c("SOCKcluster", "cluster")`.
For the default cluster setter and getter, the registered default cluster or `NULL` if there is no such cluster.
### Note
Option `homogeneous = TRUE` was for years documented as ‘Are all the hosts running identical setups?’, but this was apparently more restrictive than its author intended and not required by the code.
The current interpretation of `homogeneous = TRUE` is that `Rscript` can be launched using the same path on each worker. That path is given by the option `rscript` and defaults to the full path to `Rscript` on the master. (The workers are not required to be running the same version of **R** as the master, nor even as each other.)
For `homogeneous = FALSE`, `Rscript` on the workers is found on their default shell's path.
For the very common usage of running both master and worker on a single multi-core host, the default settings are the appropriate ones.
### Author(s)
Luke Tierney and R Core.
Derived from the [snow](https://CRAN.R-project.org/package=snow) package.
r None
`clusterApply` Apply Operations using Clusters
-----------------------------------------------
### Description
These functions provide several ways to parallelize computations using a cluster.
### Usage
```
clusterCall(cl = NULL, fun, ...)
clusterApply(cl = NULL, x, fun, ...)
clusterApplyLB(cl = NULL, x, fun, ...)
clusterEvalQ(cl = NULL, expr)
clusterExport(cl = NULL, varlist, envir = .GlobalEnv)
clusterMap(cl = NULL, fun, ..., MoreArgs = NULL, RECYCLE = TRUE,
SIMPLIFY = FALSE, USE.NAMES = TRUE,
.scheduling = c("static", "dynamic"))
clusterSplit(cl = NULL, seq)
parLapply(cl = NULL, X, fun, ..., chunk.size = NULL)
parSapply(cl = NULL, X, FUN, ..., simplify = TRUE,
USE.NAMES = TRUE, chunk.size = NULL)
parApply(cl = NULL, X, MARGIN, FUN, ..., chunk.size = NULL)
parRapply(cl = NULL, x, FUN, ..., chunk.size = NULL)
parCapply(cl = NULL, x, FUN, ..., chunk.size = NULL)
parLapplyLB(cl = NULL, X, fun, ..., chunk.size = NULL)
parSapplyLB(cl = NULL, X, FUN, ..., simplify = TRUE,
USE.NAMES = TRUE, chunk.size = NULL)
```
### Arguments
| | |
| --- | --- |
| `cl` | a cluster object, created by this package or by package [snow](https://CRAN.R-project.org/package=snow). If `NULL`, use the registered default cluster. |
| `fun, FUN` | function or character string naming a function. |
| `expr` | expression to evaluate. |
| `seq` | vector to split. |
| `varlist` | character vector of names of objects to export. |
| `envir` | environment from which to export variables. |
| `x` | a vector for `clusterApply` and `clusterApplyLB`, a matrix for `parRapply` and `parCapply`. |
| `...` | additional arguments to pass to `fun` or `FUN`: beware of partial matching to earlier arguments. |
| `MoreArgs` | additional arguments for `fun`. |
| `RECYCLE` | logical; if true shorter arguments are recycled. |
| `X` | A vector (atomic or list) for `parLapply` and `parSapply`, an array for `parApply`. |
| `chunk.size` | scalar number; number of invocations of `fun` or `FUN` in one chunk; a chunk is a unit for scheduling. |
| `MARGIN` | vector specifying the dimensions to use. |
| `simplify, USE.NAMES` | logical; see `[sapply](../../base/html/lapply)`. |
| `SIMPLIFY` | logical; see `[mapply](../../base/html/mapply)`. |
| `.scheduling` | should tasks be statically allocated to nodes or dynamic load-balancing used? |
### Details
`clusterCall` calls a function `fun` with identical arguments `...` on each node.
`clusterEvalQ` evaluates a literal expression on each cluster node. It is a parallel version of `[evalq](../../base/html/eval)`, and is a convenience function invoking `clusterCall`.
`clusterApply` calls `fun` on the first node with arguments `x[[1]]` and `...`, on the second node with `x[[2]]` and `...`, and so on, recycling nodes as needed.
`clusterApplyLB` is a load balancing version of `clusterApply`. If the length `n` of `x` is not greater than the number of nodes `p`, then a job is sent to `n` nodes. Otherwise the first `p` jobs are placed in order on the `p` nodes. When the first job completes, the next job is placed on the node that has become free; this continues until all jobs are complete. Using `clusterApplyLB` can result in better cluster utilization than using `clusterApply`, but increased communication can reduce performance. Furthermore, the node that executes a particular job is non-deterministic. This means that simulations that assign RNG streams to nodes will not be reproducible.
`clusterMap` is a multi-argument version of `clusterApply`, analogous to `[mapply](../../base/html/mapply)` and `[Map](../../base/html/funprog)`. If `RECYCLE` is true shorter arguments are recycled (and either none or all must be of length zero); otherwise, the result length is the length of the shortest argument. Nodes are recycled if the length of the result is greater than the number of nodes. (`mapply` always uses `RECYCLE = TRUE`, and has argument `SIMPLIFY = TRUE`. `Map` always uses `RECYCLE = TRUE`.)
`clusterExport` assigns the values on the master **R** process of the variables named in `varlist` to variables of the same names in the global environment (aka ‘workspace’) of each node. The environment on the master from which variables are exported defaults to the global environment.
`clusterSplit` splits `seq` into a consecutive piece for each cluster and returns the result as a list with length equal to the number of nodes. Currently the pieces are chosen to be close to equal in length: the computation is done on the master.
`parLapply`, `parSapply`, and `parApply` are parallel versions of `lapply`, `sapply` and `apply`. Chunks of computation are statically allocated to nodes using `clusterApply`. By default, the number of chunks is the same as the number of nodes. `parLapplyLB`, `parSapplyLB` are load-balancing versions, intended for use when applying `FUN` to different elements of `X` takes quite variable amounts of time, and either the function is deterministic or reproducible results are not required. Chunks of computation are allocated dynamically to nodes using `clusterApplyLB`. From **R** 3.5.0, the default number of chunks is twice the number of nodes. Before **R** 3.5.0, the (fixed) number of chunks was the same as the number of nodes. As for `clusterApplyLB`, with load balancing the node that executes a particular job is non-deterministic and simulations that assign RNG streams to nodes will not be reproducible.
`parRapply` and `parCapply` are parallel row and column `apply` functions for a matrix `x`; they may be slightly more efficient than `parApply` but do less post-processing of the result.
A chunk size of `0` with static scheduling uses the default (one chunk per node). With dynamic scheduling, chunk size of `0` has the same effect as `1` (one invocation of `FUN`/`fun` per chunk).
### Value
For `clusterCall`, `clusterEvalQ` and `clusterSplit`, a list with one element per node.
For `clusterApply` and `clusterApplyLB`, a list the same length as `x`.
`clusterMap` follows `[mapply](../../base/html/mapply)`.
`clusterExport` returns nothing.
`parLapply` returns a list the length of `X`.
`parSapply` and `parApply` follow `[sapply](../../base/html/lapply)` and `[apply](../../base/html/apply)` respectively.
`parRapply` and `parCapply` always return a vector. If `FUN` always returns a scalar result this will be of length the number of rows or columns: otherwise it will be the concatenation of the returned values.
An error is signalled on the master if any of the workers produces an error.
### Note
These functions are almost identical to those in package [snow](https://CRAN.R-project.org/package=snow).
Two exceptions: `parLapply` has argument `X` not `x` for consistency with `[lapply](../../base/html/lapply)`, and `parSapply` has been updated to match `[sapply](../../base/html/lapply)`.
### Author(s)
Luke Tierney and R Core.
Derived from the [snow](https://CRAN.R-project.org/package=snow) package.
### Examples
```
## Use option cl.cores to choose an appropriate cluster size.
cl <- makeCluster(getOption("cl.cores", 2))
clusterApply(cl, 1:2, get("+"), 3)
xx <- 1
clusterExport(cl, "xx")
clusterCall(cl, function(y) xx + y, 2)
## Use clusterMap like an mapply example
clusterMap(cl, function(x, y) seq_len(x) + y,
c(a = 1, b = 2, c = 3), c(A = 10, B = 0, C = -10))
parSapply(cl, 1:20, get("+"), 3)
## A bootstrapping example, which can be done in many ways:
clusterEvalQ(cl, {
## set up each worker. Could also use clusterExport()
library(boot)
cd4.rg <- function(data, mle) MASS::mvrnorm(nrow(data), mle$m, mle$v)
cd4.mle <- list(m = colMeans(cd4), v = var(cd4))
NULL
})
res <- clusterEvalQ(cl, boot(cd4, corr, R = 100,
sim = "parametric", ran.gen = cd4.rg, mle = cd4.mle))
library(boot)
cd4.boot <- do.call(c, res)
boot.ci(cd4.boot, type = c("norm", "basic", "perc"),
conf = 0.9, h = atanh, hinv = tanh)
stopCluster(cl)
## or
library(boot)
run1 <- function(...) {
library(boot)
cd4.rg <- function(data, mle) MASS::mvrnorm(nrow(data), mle$m, mle$v)
cd4.mle <- list(m = colMeans(cd4), v = var(cd4))
boot(cd4, corr, R = 500, sim = "parametric",
ran.gen = cd4.rg, mle = cd4.mle)
}
cl <- makeCluster(mc <- getOption("cl.cores", 2))
## to make this reproducible
clusterSetRNGStream(cl, 123)
cd4.boot <- do.call(c, parLapply(cl, seq_len(mc), run1))
boot.ci(cd4.boot, type = c("norm", "basic", "perc"),
conf = 0.9, h = atanh, hinv = tanh)
stopCluster(cl)
```

`RngStream` Implementation of Pierre L'Ecuyer's RngStreams
-----------------------------------------------------------
### Description
This is an **R** re-implementation of Pierre L'Ecuyer's ‘RngStreams’ multiple streams of pseudo-random numbers.
### Usage
```
nextRNGStream(seed)
nextRNGSubStream(seed)
clusterSetRNGStream(cl = NULL, iseed)
mc.reset.stream()
```
### Arguments
| | |
| --- | --- |
| `seed` | An integer vector of length 7 as given by `.Random.seed` when the "L'Ecuyer-CMRG" RNG is in use. See `[RNG](../../base/html/random)` for the valid values. |
| `cl` | A cluster from this package or package [snow](https://CRAN.R-project.org/package=snow), or (if `NULL`) the registered cluster. |
| `iseed` | An integer to be supplied to `[set.seed](../../base/html/random)`, or `NULL` not to set reproducible seeds. |
### Details
The ‘RngStream’ interface works with (potentially) multiple streams of pseudo-random numbers: this is particularly suitable for working with parallel computations since each task can be assigned a separate RNG stream.
This uses as its underlying generator `RNGkind("L'Ecuyer-CMRG")`, of L'Ecuyer (1999), which has a seed vector of 6 (signed) integers and a period of around *2^191*. Each ‘stream’ is a subsequence of the period of length *2^127* which is in turn divided into ‘substreams’ of length *2^76*.
The idea of L'Ecuyer *et al* (2002) is to use a separate stream for each of the parallel computations (which ensures that the random numbers generated never get into sync) and the parallel computations can themselves use substreams if required. The original interface stores the original seed of the first stream, the original seed of the current stream and the current seed: this could be implemented in **R**, but it is as easy to work by saving the relevant values of `.Random.seed`: see the examples.
`clusterSetRNGStream` selects the `"L'Ecuyer-CMRG"` RNG and then distributes streams to the members of a cluster, optionally setting the seed of the streams by `set.seed(iseed)` (otherwise they are set from the current seed of the master process: after selecting the L'Ecuyer generator).
When not on Windows, calling `mc.reset.stream()` after setting the L'Ecuyer random number generator and seed makes runs from `<mcparallel>(mc.set.seed = TRUE)` reproducible. This is done internally in `<mclapply>` and `<pvec>`. (Note that it does not set the seed in the master process, so does not affect the fallback-to-serial versions of these functions.)
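As a concrete illustration of working by saving values of `.Random.seed` (an added sketch, not taken from this page), one stream can be reserved per task as follows:

```
library(parallel)
RNGkind("L'Ecuyer-CMRG")
set.seed(42)
s <- .Random.seed
seeds <- vector("list", 4)
for (i in seq_along(seeds)) {
    seeds[[i]] <- s          # seed starting stream i
    s <- nextRNGStream(s)    # advance to the next stream
}
## task i would then restore its stream with
## assign(".Random.seed", seeds[[i]], envir = .GlobalEnv)
```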
### Value
For `nextRNGStream` and `nextRNGSubStream`, a value which can be assigned to `.Random.seed`.
### Note
Interfaces to L'Ecuyer's C code are available in CRAN packages [rlecuyer](https://CRAN.R-project.org/package=rlecuyer) and [rstream](https://CRAN.R-project.org/package=rstream).
### Author(s)
Brian Ripley
### References
L'Ecuyer, P. (1999). Good parameters and implementations for combined multiple recursive random number generators. *Operations Research*, **47**, 159–164. doi: [10.1287/opre.47.1.159](https://doi.org/10.1287/opre.47.1.159).
L'Ecuyer, P., Simard, R., Chen, E. J. and Kelton, W. D. (2002). An object-oriented random-number package with many long streams and substreams. *Operations Research*, **50**, 1073–1075. doi: [10.1287/opre.50.6.1073.358](https://doi.org/10.1287/opre.50.6.1073.358).
### See Also
`[RNG](../../base/html/random)` for fuller details of **R**'s built-in random number generators.
The vignette for package parallel.
### Examples
```
RNGkind("L'Ecuyer-CMRG")
set.seed(123)
(s <- .Random.seed)
## do some work involving random numbers.
nextRNGStream(s)
nextRNGSubStream(s)
```
`detectCores` Detect the Number of CPU Cores
---------------------------------------------
### Description
Attempt to detect the number of CPU cores on the current host.
### Usage
```
detectCores(all.tests = FALSE, logical = TRUE)
```
### Arguments
| | |
| --- | --- |
| `all.tests` | Logical: if true apply all known tests. |
| `logical` | Logical: if possible, use the number of physical CPUs/cores (if `FALSE`) or logical CPUs (if `TRUE`). Currently this is honoured only on macOS, Solaris and Windows. |
### Details
This attempts to detect the number of available CPU cores.
It has methods to do so for Linux, macOS, FreeBSD, OpenBSD, Solaris and Windows. `detectCores(TRUE)` could be tried on other Unix-alike systems.
### Value
An integer, `NA` if the answer is unknown.
Exactly what this represents is OS-dependent: where possible by default it counts logical (e.g., hyperthreaded) CPUs and not physical cores or packages.
Under macOS there is a further distinction between ‘available in the current power management mode’ and ‘could be available this boot’, and this function returns the first.
On Windows: Only versions of Windows since XP SP3 are supported. Microsoft documents that with `logical = FALSE` it will report the number of cores on Vista or later, but the number of physical CPU packages on XP or Server 2003: however it reported correctly on the XP systems we tested.
On Sparc Solaris `logical = FALSE` returns the number of physical cores and `logical = TRUE` returns the number of available hardware threads. (Some Sparc CPUs have multiple cores per CPU, others have multiple threads per core and some have both.) For example, the UltraSparc T2 CPU in the former CRAN check server was a single physical CPU with 8 cores, and each core supports 8 hardware threads. So `detectCores(logical = FALSE)` returns 8, and `detectCores(logical = TRUE)` returns 64.
Where virtual machines are in use, one would hope that the result for `logical = TRUE` represents the number of CPUs available (or potentially available) to that particular VM.
### Note
This is not suitable for use directly for the `mc.cores` argument of `mclapply` nor specifying the number of cores in `makeCluster`. First because it may return `NA`, second because it does not give the number of *allowed* cores, and third because on Sparc Solaris and some Windows boxes it is not reasonable to try to use all the logical CPUs at once.
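A defensive pattern along these lines (an added illustration, not from this page) guards against `NA` and leaves one core free:

```
library(parallel)
n <- detectCores(logical = FALSE)
cores <- if (is.na(n)) 1L else max(1L, n - 1L)  # fall back to 1, keep one core free
cl <- makeCluster(cores)
stopCluster(cl)
```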
### Author(s)
Simon Urbanek and Brian Ripley
### Examples
```
detectCores()
detectCores(logical = FALSE)
```
`parallel-package` Support for Parallel Computation
----------------------------------------------------
### Description
Support for parallel computation, including random-number generation.
### Details
This package was first included with **R** 2.14.0 in 2011.
There is support for multiple RNG streams with the "L'Ecuyer-CMRG" [RNG](../../base/html/random): see `[nextRNGStream](rngstream)`.
It contains functionality derived from and pretty much equivalent to that contained in packages multicore (formerly on CRAN, with some low-level functions renamed and not exported) and [snow](https://CRAN.R-project.org/package=snow) (for socket clusters only, but MPI and NWS clusters generated by [snow](https://CRAN.R-project.org/package=snow) are also supported). There have been many enhancements and bug fixes since 2011.
This package also provides `[makeForkCluster](makecluster)` to create socket clusters by forking (not Windows).
For a complete list of exported functions, use `library(help = "parallel")`.
### Author(s)
Brian Ripley, Luke Tierney and Simon Urbanek
Maintainer: R Core Team [[email protected]](mailto:[email protected])
### See Also
Parallel computation involves launching worker processes: functions `[psnice](../../tools/html/psnice)` and `[pskill](../../tools/html/pskill)` in package tools provide means to manage such processes.
`mcfork` Fork a Copy of the Current R Process
----------------------------------------------
### Description
These are low-level functions, not available on Windows, and not exported from the namespace.
`mcfork` creates a new child process as a copy of the current **R** process.
`mcexit` closes the current child process, informing the master process as necessary.
### Usage
```
mcfork(estranged = FALSE)
mcexit(exit.code = 0L, send = NULL)
```
### Arguments
| | |
| --- | --- |
| `estranged` | logical, if `TRUE` then the new process has no ties to the parent process, will not show in the list of children and will not be killed on exit. |
| `exit.code` | process exit code. By convention `0L` signifies a clean exit, `1L` an error. |
| `send` | if not `NULL` send this data before exiting (equivalent to using `[sendMaster](children)`). |
### Details
The `mcfork` function provides an interface to the `fork` system call. In addition it sets up a pipe between the master and child process that can be used to send data from the child process to the master (see `[sendMaster](children)`) and child's ‘stdin’ is re-mapped to another pipe held by the master process (see `[sendChildStdin](children)`).
If you are not familiar with the `fork` system call, do not use this function directly as it leads to very complex inter-process interactions amongst the **R** processes involved.
In a nutshell `fork` spawns a copy (child) of the current process, that can work in parallel to the master (parent) process. At the point of forking both processes share exactly the same state including the workspace, global options, loaded packages etc. Forking is relatively cheap in modern operating systems and no real copy of the used memory is created, instead both processes share the same memory and only modified parts are copied. This makes `mcfork` an ideal tool for parallel processing since there is no need to setup the parallel working environment, data and code is shared automatically from the start.
`mcexit` is to be run in the child process. It sends `send` to the master (unless `NULL`) and then shuts down the child process. The child can also be shut down by sending it the signal `SIGUSR1`, as is done by the unexported function `parallel:::rmChild`.
### Value
`mcfork` returns an object of the class `"childProcess"` to the master and of class `"masterProcess"` to the child: both the classes inherit from class `"process"`. If `estranged` is set to `TRUE` then the child process will be of the class `"estrangedProcess"` and cannot communicate with the master process nor will it show up on the list of children. These are lists with components `pid` (the process id of the *other* process) and a vector `fd` of the two file descriptor numbers for ends in the current process of the inter-process pipes.
`mcexit` never returns.
### GUI/embedded environments
It is *strongly discouraged* to use `mcfork` and the higher-level functions which rely on it (e.g., `mcparallel`, `mclapply` and `pvec`) in GUI or embedded environments, because it leads to several processes sharing the same GUI which will likely cause chaos (and possibly crashes). Child processes should never use on-screen graphics devices. Some precautions have been taken to make this usable in `R.app` on macOS, but users of third-party front-ends should consult their documentation.
This can also apply to other connections (e.g., to an X server) created before forking, and to files opened by e.g. graphics devices.
Note that tcltk counts as a GUI for these purposes since `Tcl` runs an event loop. That event loop is inhibited in a child process but there could still be problems with Tk graphical connections.
It is *strongly discouraged* to use `mcfork` and the higher-level functions in any multi-threaded R process (with additional threads created by a third-party library or package). Such use can lead to deadlocks or crashes, because the child process created by `mcfork` may not be able to access resources locked in the parent or may see an inconsistent version of global data (`mcfork` runs system call `fork` without `exec`).
If in doubt, it is safer to use a non-FORK cluster (see `[makeCluster](makecluster)`, `[clusterApply](clusterapply)`).
### Warning
This is a very low-level API for expert use only.
### Author(s)
Simon Urbanek and R Core.
Derived from the multicore package formerly on CRAN.
### See Also
`<mcparallel>`, `[sendMaster](children)`
### Examples
```
## This will work when run as an example, but not when pasted in.
p <- parallel:::mcfork()
if (inherits(p, "masterProcess")) {
cat("I'm a child! ", Sys.getpid(), "\n")
parallel:::mcexit(,"I was a child")
}
cat("I'm the master\n")
unserialize(parallel:::readChildren(1.5))
```
`mclapply` Parallel Versions of lapply and mapply using Forking
----------------------------------------------------------------
### Description
`mclapply` is a parallelized version of `[lapply](../../base/html/lapply)`, it returns a list of the same length as `X`, each element of which is the result of applying `FUN` to the corresponding element of `X`.
It relies on forking and hence is not available on Windows unless `mc.cores = 1`.
`mcmapply` is a parallelized version of `[mapply](../../base/html/mapply)`, and `mcMap` corresponds to `[Map](../../base/html/funprog)`.
### Usage
```
mclapply(X, FUN, ...,
mc.preschedule = TRUE, mc.set.seed = TRUE,
mc.silent = FALSE, mc.cores = getOption("mc.cores", 2L),
mc.cleanup = TRUE, mc.allow.recursive = TRUE, affinity.list = NULL)
mcmapply(FUN, ...,
MoreArgs = NULL, SIMPLIFY = TRUE, USE.NAMES = TRUE,
mc.preschedule = TRUE, mc.set.seed = TRUE,
mc.silent = FALSE, mc.cores = getOption("mc.cores", 2L),
mc.cleanup = TRUE, affinity.list = NULL)
mcMap(f, ...)
```
### Arguments
| | |
| --- | --- |
| `X` | a vector (atomic or list) or an expressions vector. Other objects (including classed objects) will be coerced by `[as.list](../../base/html/list)`. |
| `FUN` | the function to be applied to (`mclapply`) each element of `X` or (`mcmapply`) in parallel to `...`. |
| `f` | the function to be applied in parallel to `...`. |
| `...` | For `mclapply`, optional arguments to `FUN`. For `mcmapply` and `mcMap`, vector or list inputs: see `[mapply](../../base/html/mapply)`. |
| `MoreArgs, SIMPLIFY, USE.NAMES` | see `[mapply](../../base/html/mapply)`. |
| `mc.preschedule` | if set to `TRUE` then the computation is first divided to (at most) as many jobs as there are cores and then the jobs are started, each job possibly covering more than one value. If set to `FALSE` then one job is forked for each value of `X`. The former is better for short computations or a large number of values in `X`, the latter is better for jobs that have high variance of completion time and not too many values of `X` compared to `mc.cores`. |
| `mc.set.seed` | See `<mcparallel>`. |
| `mc.silent` | if set to `TRUE` then all output on ‘stdout’ will be suppressed for all parallel processes forked (‘stderr’ is not affected). |
| `mc.cores` | The number of cores to use, i.e. at most how many child processes will be run simultaneously. The option is initialized from environment variable MC\_CORES if set. Must be at least one, and parallelization requires at least two cores. |
| `mc.cleanup` | if set to `TRUE` then all children that have been forked by this function will be killed (by sending `SIGTERM`) before this function returns. Under normal circumstances `mclapply` waits for the children to deliver results, so this option usually has only effect when `mclapply` is interrupted. If set to `FALSE` then child processes are collected, but not forcefully terminated. As a special case this argument can be set to the number of the signal that should be used to kill the children instead of `SIGTERM`. |
| `mc.allow.recursive` | Unless true, calling `mclapply` in a child process will use the child and not fork again. |
| `affinity.list` | a vector (atomic or list) containing the CPU affinity mask for each element of `X`. The CPU affinity mask describes on which CPU (core or hyperthread unit) a given item is allowed to run, see `<mcaffinity>`. To use this parameter prescheduling has to be deactivated (`mc.preschedule = FALSE`). |
### Details
`mclapply` is a parallelized version of `[lapply](../../base/html/lapply)`, provided `mc.cores > 1`: for `mc.cores == 1` (and the `affinity.list` is `NULL`) it simply calls `lapply`.
By default (`mc.preschedule = TRUE`) the input `X` is split into as many parts as there are cores (currently the values are spread across the cores sequentially, i.e. first value to core 1, second to core 2, ... (core + 1)-th value to core 1 etc.) and then one process is forked to each core and the results are collected.
Without prescheduling, a separate job is forked for each value of `X`. To ensure that no more than `mc.cores` jobs are running at once, once that number has been forked the master process waits for a child to complete before the next fork.
Due to the parallel nature of the execution random numbers are not sequential (in the random number sequence) as they would be when using `lapply`. They are sequential for each forked process, but not all jobs as a whole. See `<mcparallel>` or the package's vignette for ways to make the results reproducible with `mc.preschedule = TRUE`.
Note: the number of file descriptors (and processes) is usually limited by the operating system, so you may have trouble using more than 100 cores or so (see `ulimit -n` or similar in your OS documentation) unless you raise the limit of permissible open file descriptors (fork will fail with error `"unable to create a pipe"`).
Prior to **R** 3.4.0 and on a 32-bit platform, the [serialize](../../base/html/serialize)d result from each forked process is limited to *2^31 - 1* bytes. (Returning very large results via serialization is inefficient and should be avoided.)
`affinity.list` can be used to run elements of `X` on specific CPUs. This can be helpful, if elements of `X` have a high variance of completion time or if the hardware architecture is heterogeneous. It also enables the development of scheduling strategies for optimizing the overall runtime of parallel jobs. If `affinity.list` is set, the `mc.core` parameter is replaced with the number of CPU ids used in the affinity masks.
### Value
For `mclapply`, a list of the same length as `X` and named by `X`.
For `mcmapply`, a list, vector or array: see `[mapply](../../base/html/mapply)`.
For `mcMap`, a list.
Each forked process runs its job inside `try(..., silent = TRUE)` so if errors occur they will be stored as class `"try-error"` objects in the return value and a warning will be given. Note that the job will typically involve more than one value of `X` and hence a `"try-error"` object will be returned for all the values involved in the failure, even if not all of them failed. If any forked process is killed or fails to deliver a result for any reason, values involved in the failure will be `NULL`. To allow detection of such errors, `FUN` should not return `NULL`. As of **R** 4.0, the return value of `mcmapply` is always a list when it needs to contain `"try-error"` objects (`SIMPLIFY` is overridden to `FALSE`).
### Warning
It is *strongly discouraged* to use these functions in GUI or embedded environments, because it leads to several processes sharing the same GUI which will likely cause chaos (and possibly crashes). Child processes should never use on-screen graphics devices.
Some precautions have been taken to make this usable in `R.app` on macOS, but users of third-party front-ends should consult their documentation.
Note that tcltk counts as a GUI for these purposes since `Tcl` runs an event loop. That event loop is inhibited in a child process but there could still be problems with Tk graphical connections.
It is *strongly discouraged* to use these functions with multi-threaded libraries or packages (see `<mcfork>` for more details). If in doubt, it is safer to use a non-FORK cluster (see `[makeCluster](makecluster)`, `[clusterApply](clusterapply)`).
### Author(s)
Simon Urbanek and R Core. The `affinity.list` feature by Helena Kotthaus and Andreas Lang, TU Dortmund. Derived from the multicore package formerly on CRAN.
### See Also
`<mcparallel>`, `<pvec>`, `[parLapply](clusterapply)`, `[clusterMap](clusterapply)`.
`[simplify2array](../../base/html/lapply)` for results like `[sapply](../../base/html/lapply)`.
### Examples
```
simplify2array(mclapply(rep(4, 5), rnorm))
# use the same random numbers for all values
set.seed(1)
simplify2array(mclapply(rep(4, 5), rnorm, mc.preschedule = FALSE,
mc.set.seed = FALSE))
## Contrast this with the examples for clusterCall
library(boot)
cd4.rg <- function(data, mle) MASS::mvrnorm(nrow(data), mle$m, mle$v)
cd4.mle <- list(m = colMeans(cd4), v = var(cd4))
mc <- getOption("mc.cores", 2)
run1 <- function(...) boot(cd4, corr, R = 500, sim = "parametric",
ran.gen = cd4.rg, mle = cd4.mle)
## To make this reproducible:
set.seed(123, "L'Ecuyer")
res <- mclapply(seq_len(mc), run1)
cd4.boot <- do.call(c, res)
boot.ci(cd4.boot, type = c("norm", "basic", "perc"),
conf = 0.9, h = atanh, hinv = tanh)
## Usage of the affinity.list parameter
A <- runif(2500000,0,100)
B <- runif(2500000,0,100)
C <- runif(5000000,0,100)
first <- function(i) head(sort(i), n = 1)
# Restrict all elements of X to run on CPUs 1 and 2
affL <- list(c(1,2), c(1,2), c(1,2))
mclapply(list(A, A, A), first, mc.preschedule = FALSE, affinity.list = affL)
# Completion times are assumed to have a high variance
# To optimize the overall execution time elements of X are scheduled to suitable CPUs
# Assuming that the runtime for C is as long as the runtime of A plus B
# mapping: A to 1 , B to 1, C to 2
X <- list(A, B, C)
affL <- c(1, 1, 2)
mclapply(X, first, mc.preschedule = FALSE, affinity.list = affL)
```
`splitIndices` Divide Tasks for Distribution in a Cluster
----------------------------------------------------------
### Description
This divides up `1:nx` into `ncl` lists of approximately equal size, as a way to allocate tasks to nodes in a cluster.
It is mainly for internal use, but some package authors have found it useful.
### Usage
```
splitIndices(nx, ncl)
```
### Arguments
| | |
| --- | --- |
| `nx` | Number of tasks. |
| `ncl` | Number of cluster nodes. |
### Value
A list of length `ncl`, each element being an integer vector.
### Examples
```
splitIndices(20, 3)
```
`pvec` Parallelize a Vector Map Function using Forking
-------------------------------------------------------
### Description
`pvec` parallelizes the execution of a function on vector elements by splitting the vector and submitting each part to one core. The function must be a vectorized map, i.e. it takes a vector input and creates a vector output of exactly the same length as the input which doesn't depend on the partition of the vector.
It relies on forking and hence is not available on Windows unless `mc.cores = 1`.
### Usage
```
pvec(v, FUN, ..., mc.set.seed = TRUE, mc.silent = FALSE,
mc.cores = getOption("mc.cores", 2L), mc.cleanup = TRUE)
```
### Arguments
| | |
| --- | --- |
| `v` | vector to operate on |
| `FUN` | function to call on each part of the vector |
| `...` | any further arguments passed to `FUN` after the vector |
| `mc.set.seed` | See `<mcparallel>`. |
| `mc.silent` | if set to `TRUE` then all output on ‘stdout’ will be suppressed for all parallel processes forked (‘stderr’ is not affected). |
| `mc.cores` | The number of cores to use, i.e. at most how many child processes will be run simultaneously. Must be at least one, and at least two for parallel operation. The option is initialized from environment variable MC\_CORES if set. |
| `mc.cleanup` | See the description of this argument in `<mclapply>`. |
### Details
`pvec` parallelizes `FUN(x, ...)` where `FUN` is a function that returns a vector of the same length as `x`. `FUN` must also be pure (i.e., without side-effects) since side-effects are not collected from the parallel processes. The vector is split into nearly identically sized subvectors on which `FUN` is run. Although it is in principle possible to use functions that are not necessarily maps, the interpretation would be case-specific as the splitting is in theory arbitrary (a warning is given in such cases).
The major difference between `pvec` and `<mclapply>` is that `mclapply` will run `FUN` on each element separately whereas `pvec` assumes that `c(FUN(x[1]), FUN(x[2]))` is equivalent to `FUN(x[1:2])` and thus will split into as many calls to `FUN` as there are cores (or elements, if fewer), each handling a subset vector. This makes it more efficient than `mclapply` but requires the above assumption on `FUN`.
If `mc.cores == 1` this evaluates `FUN(v, ...)` in the current process.
### Value
The result of the computation – in a successful case it should be of the same length as `v`. If an error occurred or the function was not a map the result may be shorter or longer, and a warning is given.
### Note
Due to the nature of the parallelization, error handling does not follow the usual rules since errors will be returned as strings and results from killed child processes will show up simply as non-existent data. Therefore it is the responsibility of the user to check the length of the result to make sure it is of the correct size. `pvec` raises a warning if that is the case since it does not know whether such an outcome is intentional or not.
See `<mcfork>` for the inadvisability of using this with GUI front-ends and multi-threaded libraries.
### Author(s)
Simon Urbanek and R Core.
Derived from the multicore package formerly on CRAN.
### See Also
`<mcparallel>`, `<mclapply>`, `[parLapply](clusterapply)`, `[clusterMap](clusterapply)`.
### Examples
```
x <- pvec(1:1000, sqrt)
stopifnot(all(x == sqrt(1:1000)))
# One use is to convert date strings to unix time in large datasets
# as that is a relatively slow operation.
# So let's get some random dates first
# (A small test only with 2 cores: set options("mc.cores")
# and increase N for a larger-scale test.)
N <- 1e5
dates <- sprintf('%04d-%02d-%02d', as.integer(2000+rnorm(N)),
as.integer(runif(N, 1, 12)), as.integer(runif(N, 1, 28)))
system.time(a <- as.POSIXct(dates))
# But specifying the format is faster
system.time(a <- as.POSIXct(dates, format = "%Y-%m-%d"))
# pvec ought to be faster, but system overhead can be high
system.time(b <- pvec(dates, as.POSIXct, format = "%Y-%m-%d"))
stopifnot(all(a == b))
# using mclapply for this would be much slower because each value
# will require a separate call to as.POSIXct()
# as lapply(dates, as.POSIXct) does
system.time(c <- unlist(mclapply(dates, as.POSIXct, format = "%Y-%m-%d")))
stopifnot(all(a == c))
```
`writePACKAGES` Generate PACKAGES Files
----------------------------------------
### Description
Generate ‘PACKAGES’, ‘PACKAGES.gz’ and ‘PACKAGES.rds’ files for a repository of source or Mac/Windows binary packages.
### Usage
```
write_PACKAGES(dir = ".", fields = NULL,
type = c("source", "mac.binary", "win.binary"),
verbose = FALSE, unpacked = FALSE, subdirs = FALSE,
latestOnly = TRUE, addFiles = FALSE, rds_compress = "xz",
validate = FALSE)
```
### Arguments
| | |
| --- | --- |
| `dir` | Character vector describing the location of the repository (directory including source or binary packages) to generate the ‘PACKAGES’, ‘PACKAGES.gz’ and ‘PACKAGES.rds’ files from and write them to. |
| `fields` | a character vector giving the fields to be used in the ‘PACKAGES’, ‘PACKAGES.gz’ and ‘PACKAGES.rds’ files in addition to the default ones, or `NULL` (default). The default corresponds to the fields needed by `[available.packages](../../utils/html/available.packages)`: `"Package"`, `"Version"`, `"Priority"`, `"Depends"`, `"Imports"`, `"LinkingTo"`, `"Suggests"`, `"Enhances"`, `"OS_type"`, `"License"` and `"Archs"`, and those fields will always be included, plus the file name in field `"File"` if `addFiles = TRUE` and the path to the subdirectory in field `"Path"` if subdirectories are used. |
| `type` | Type of packages: currently source ‘.tar.{gz,bz2,xz}’ archives, and macOS or Windows binary (‘.tgz’ or ‘.zip’, respectively) packages are supported. Defaults to `"win.binary"` on Windows and to `"source"` otherwise. |
| `verbose` | logical. Should packages be listed as they are processed? |
| `unpacked` | a logical indicating whether the package contents are available in unpacked form or not (default). |
| `subdirs` | either logical (to indicate if subdirectories should be included, recursively) or a character vector of names of subdirectories to include (which are not recursed). |
| `latestOnly` | logical: if multiple versions of a package are available should only the latest version be included? |
| `addFiles` | logical: should the filenames be included as field File in the ‘PACKAGES’ file. |
| `rds_compress` | The type of compression to be used for ‘PACKAGES.rds’: see `[saveRDS](../../base/html/readrds)`. The default is the one found to give maximal compression, and is as used on CRAN. |
| `validate` | a logical indicating whether ‘DESCRIPTION’ files should be validated, and the corresponding packages skipped in case this finds problems. |
### Details
`write_PACKAGES` scans the named directory for R packages, extracts information from each package's ‘DESCRIPTION’ file, and writes this information into the ‘PACKAGES’, ‘PACKAGES.gz’ and ‘PACKAGES.rds’ files, where the first two represent the information in DCF format, and the third serializes it via `[saveRDS](../../base/html/readrds)`.
Including non-latest versions of packages is only useful if they have less constraining version requirements, so for example `latestOnly = FALSE` could be used for a source repository when foo\_1.0 depends on R >= 2.15.0 but foo\_0.9 is available which depends on R >= 2.11.0.
Support for repositories with subdirectories and hence for `subdirs != FALSE` depends on recording a `"Path"` field in the ‘PACKAGES’ files.
Support for more general file names (e.g., other types of compression) *via* a `"File"` field in the ‘PACKAGES’ files can be used by `[download.packages](../../utils/html/download.packages)`. If the file names are not of the standard form, use `addFiles = TRUE`.
`type = "win.binary"` uses `[unz](../../base/html/connections)` connections to read all ‘DESCRIPTION’ files contained in the (zipped) binary packages for Windows in the given directory `dir`, and builds files ‘PACKAGES’, ‘PACKAGES.gz’ and ‘PACKAGES.rds’ files from this information.
For a remote repository there is a tradeoff between download speed and time spent by `[available.packages](../../utils/html/available.packages)` processing the downloaded file(s). For large repositories it is likely to be beneficial to use `rds_compress = "xz"`.
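A rough sketch of the workflow (added here; the paths are hypothetical and previously built ‘.tar.gz’ files are assumed to have been copied in):

```
repo <- file.path(tempdir(), "repo", "src", "contrib")
dir.create(repo, recursive = TRUE)
## ... copy previously built foo_x.y.tar.gz files into 'repo', then:
n <- tools::write_PACKAGES(repo, type = "source")
## read the index back (on a Unix-alike; 0 packages gives an empty result)
utils::available.packages(contriburl = paste0("file://", repo))
```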
### Value
Invisibly returns the number of packages described in the resulting ‘PACKAGES’, ‘PACKAGES.gz’ and ‘PACKAGES.rds’ files. If `0`, no packages were found and no files were written.
### Note
Processing ‘.tar.gz’ archives to extract the ‘DESCRIPTION’ files is quite slow.
This function can be useful on other OSes to prepare a repository to be accessed by Windows machines, so `type = "win.binary"` should work on all OSes.
### Author(s)
Uwe Ligges and R-core.
### See Also
See `[read.dcf](../../base/html/dcf)` and `[write.dcf](../../base/html/dcf)` for reading ‘DESCRIPTION’ files and writing the ‘PACKAGES’ and ‘PACKAGES.gz’ files. See `[update\_PACKAGES](updatepackages)` for efficiently updating existing ‘PACKAGES’ and ‘PACKAGES.gz’ files.
### Examples
```
## Not run:
write_PACKAGES("c:/myFolder/myRepository") # on Windows
write_PACKAGES("/pub/RWin/bin/windows/contrib/2.9",
type = "win.binary") # on Linux
## End(Not run)
```
`read.00Index` Read 00Index-style Files
----------------------------------------
### Description
Read item/description information from ‘00Index’-like files. Such files are description lists rendered in tabular form, and currently used for the ‘INDEX’ and ‘demo/00Index’ files of add-on packages.
### Usage
```
read.00Index(file)
```
### Arguments
| | |
| --- | --- |
| `file` | the name of a file to read data values from. If the specified file is `""`, then input is taken from the keyboard (in this case input can be terminated by a blank line). Alternatively, `file` can be a `[connection](../../base/html/connections)`, which will be opened if necessary, and if so closed at the end of the function call. |
### Value
A character matrix with 2 columns named `"Item"` and `"Description"` which hold the items and descriptions.
### See Also
`[formatDL](../../base/html/formatdl)` for the inverse operation of creating a 00Index-style file from items and their descriptions.
`codoc` Check Code/Documentation Consistency
---------------------------------------------
### Description
Find inconsistencies between actual and documented ‘structure’ of **R** objects in a package. `codoc` compares names and optionally also corresponding positions and default values of the arguments of functions. `codocClasses` and `codocData` compare slot names of S4 classes and variable names of data sets, respectively.
### Usage
```
codoc(package, dir, lib.loc = NULL,
use.values = NULL, verbose = getOption("verbose"))
codocClasses(package, lib.loc = NULL)
codocData(package, lib.loc = NULL)
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. |
| `dir` | a character string specifying the path to a package's root source directory. This must contain the subdirectories ‘man’ with **R** documentation sources (in Rd format) and ‘R’ with **R** code. Only used if `package` is not given. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
| `use.values` | if `FALSE`, do not use function default values when comparing code and docs. If `TRUE`, compare *all* default values; if `NULL` (the default), compare only the default values documented in the usage. |
| `verbose` | a logical. If `TRUE`, additional diagnostics are printed. |
### Details
The purpose of `codoc` is to check whether the documented usage of function objects agrees with their formal arguments as defined in the **R** code. This is not always straightforward, in particular as the usage information for methods to generic functions often employs the name of the generic rather than the method.
The following algorithm is used. If an installed package is used, it is loaded (unless it is the base package), after possibly detaching an already loaded version of the package. Otherwise, if the sources are used, the **R** code files of the package are collected and sourced in a new environment. Then, the usage sections of the Rd files are extracted and parsed ‘as much as possible’ to give the formals documented. For interpreted functions in the code environment, the formals are compared between code and documentation according to the values of the argument `use.values`. Synopsis sections are used if present; their occurrence is reported if `verbose` is true.
If a package has a namespace both exported and unexported objects are checked, as well as registered S3 methods. (In the unlikely event of differences the order is exported objects in the package, registered S3 methods and finally objects in the namespace and only the first found is checked.)
Currently, the R documentation format has no high-level markup for the basic ‘structure’ of classes and data sets (similar to the usage sections for function synopses). Variable names for data frames in documentation objects obtained by suitably editing ‘templates’ created by `[prompt](../../utils/html/prompt)` are recognized by `codocData` and used provided that the documentation object is for a single data frame (i.e., only has one alias). `codocClasses` analogously handles slot names for classes in documentation objects obtained by editing shells created by `[promptClass](../../methods/html/promptclass)`.
Help files named ‘pkgname-defunct.Rd’ for the appropriate pkgname are checked more loosely, as they may have undocumented arguments.
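As an added illustration, the checks can be run directly on an installed package (‘stats’ is chosen arbitrarily here):

```
library(tools)
res  <- codoc("stats")         # code/documentation mismatches, if any
resC <- codocClasses("stats")  # slot names of S4 classes
resD <- codocData("stats")     # variable names of data sets
print(res)
```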
### Value
`codoc` returns an object of class `"codoc"`. Currently, this is a list which, for each Rd object in the package where an inconsistency was found, contains an element with a list of the mismatches (which in turn are lists with elements `code` and `docs`, giving the corresponding arguments obtained from the function's code and documented usage).
`codocClasses` and `codocData` return objects of class `"codocClasses"` and `"codocData"`, respectively, with a structure similar to class `"codoc"`.
There are `print` methods for nicely displaying the information contained in such objects.
### Note
The default for `use.values` has been changed from `FALSE` to `NULL`, for **R** versions 1.9.0 and later.
### See Also
`<undoc>`, `[QC](qc)`
`check_packages_in_dir` Check Source Packages and Their Reverse Dependencies
-----------------------------------------------------------------------------
### Description
Check source packages in a given directory, optionally with their reverse dependencies.
### Usage
```
check_packages_in_dir(dir,
check_args = character(),
check_args_db = list(),
reverse = NULL,
check_env = character(),
xvfb = FALSE,
Ncpus = getOption("Ncpus", 1L),
clean = TRUE,
...)
summarize_check_packages_in_dir_results(dir, all = TRUE,
full = FALSE, ...)
summarize_check_packages_in_dir_timings(dir, all = FALSE,
full = FALSE)
summarize_check_packages_in_dir_depends(dir, all = FALSE,
which = c("Depends",
"Imports",
"LinkingTo"))
check_packages_in_dir_changes(dir, old,
outputs = FALSE, sources = FALSE, ...)
check_packages_in_dir_details(dir, logs = NULL, drop_ok = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `dir` | a character string giving the path to the directory with the source ‘.tar.gz’ files to be checked. |
| `check_args` | a character vector with arguments to be passed to `R CMD check`, or a list of length two of such character vectors to be used for checking packages and reverse dependencies, respectively. |
| `check_args_db` | a named list of character vectors with arguments to be passed to `R CMD check`, with names the respective package names. |
| `reverse` | a list with names partially matching `"repos"`, `"which"`, or `"recursive"`, giving the repositories to use for locating reverse dependencies (a subset of `getOption("repos")`, the default), the types of reverse dependencies (default: `c("Depends", "Imports", "LinkingTo")`, with shorthands `"most"` and `"all"` as for `<package_dependencies>`), and indicating whether to also check reverse dependencies of reverse dependencies and so on (default: `FALSE`), or `NULL` (default), in which case no reverse dependencies are checked. |
| `check_env` | a character vector of name=value strings to set environment variables for checking, or a list of length two of such character vectors to be used for checking packages and reverse dependencies, respectively. |
| `xvfb` | a logical indicating whether to perform checking inside a virtual framebuffer X server (Unix only), or a character vector of Xvfb options for doing so. |
| `Ncpus` | the number of parallel processes to use for parallel installation and checking. |
| `clean` | a logical indicating whether to remove the downloaded reverse dependency sources. |
| `...` | passed to `[readLines](../../base/html/readlines)`, e.g. for reading log files produced in a different encoding; currently not used by `check_packages_in_dir`. |
| `all` | a logical indicating whether to also summarize the reverse dependencies checked. |
| `full` | a logical indicating whether to also give details for checks with non-ok results, or summarize check example timings (if available). |
| `which` | see `<package_dependencies>`. |
| `old` | a character string giving the path to the directory of a previous `check_packages_in_dir` run. |
| `outputs` | a logical indicating whether to analyze changes in the outputs of the checks performed, or only (default) the status of the checks. |
| `sources` | a logical indicating whether to also investigate the changes in the source files checked (default: `FALSE`). |
| `logs` | a character vector with the paths of ‘00check.log’ to analyze. Only used if `dir` was not given. |
| `drop_ok` | a logical indicating whether to drop checks with ‘ok’ status, or a character vector with the ‘ok’ status tags to drop. The default corresponds to tags OK, NONE and SKIPPED. |
### Details
`check_packages_in_dir` makes it convenient to check source package ‘.tar.gz’ files in the given directory `dir`, along with their reverse dependencies as controlled by `reverse`.
The `"which"` component of `reverse` can also be a list, in which case reverse dependencies are obtained for each element of the list and the corresponding element of the `"recursive"` component of `reverse` (which is recycled as needed).
If needed, the source ‘.tar.gz’ files of the reverse dependencies to be checked as well are downloaded into `dir` (and removed at the end if `clean` is true). Next, all packages (additionally) needed for checking are installed to the ‘Library’ subdirectory of `dir`. Then, all ‘.tar.gz’ files are checked using the given arguments and environment variables, with outputs and messages to files in the ‘Outputs’ subdirectory of `dir`. The ‘\*.Rcheck’ directories with the check results of the reverse dependencies are renamed by prefixing their base names with rdepends\_.
Results and timings can conveniently be summarized using `summarize_check_packages_in_dir_results` and `summarize_check_packages_in_dir_timings`, respectively.
Installation and checking is performed in parallel if `Ncpus` is greater than one: this will use `[mclapply](../../parallel/html/mclapply)` on Unix and `[parLapply](../../parallel/html/clusterapply)` on Windows.
`check_packages_in_dir` returns an object inheriting from class `"check_packages_in_dir"` which has `[print](../../base/html/print)` and `[summary](../../base/html/summary)` methods.
`check_packages_in_dir_changes` makes it possible to analyze the effect of changing (some of) the sources. With `dir` and `old` the paths to the directories with the new and old sources, respectively, and the corresponding check results, possible changes in the check results can conveniently be analyzed as controlled via options `outputs` and `sources`. The changes object returned can be subscripted according to change in severity from the old to the new results by using one of `"=="`, `"!="`, `"<"`, `"<="`, `">"` or `">="` as row index.
`check_packages_in_dir_details` analyzes check log files to obtain check details as a data frame which can be used for further processing, providing check name, status and output for every check performed and not dropped according to status tag (via variables `Check`, `Status` and `Output`, respectively).
Environment variable \_R\_CHECK\_ELAPSED\_TIMEOUT\_ can be used to set a limit on the elapsed time of each `check` run. See the ‘R Internals’ manual for how the value is interpreted and for other environment variables which can be used for finer-grained control on timeouts within a `check` run.
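An added sketch of the follow-up analysis (the directory is hypothetical and checking is slow, so this is not meant to be run as-is):

```
## Not run:
dir <- "~/pkg-checks"
check_packages_in_dir(dir, reverse = list(), Ncpus = 2)
summarize_check_packages_in_dir_results(dir)
summarize_check_packages_in_dir_timings(dir)
## End(Not run)
```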
### Note
This functionality is still experimental: interfaces may change in future versions.
### Examples
```
## Not run:
## Check packages in dir without reverse dependencies:
check_packages_in_dir(dir)
## Check packages in dir and their reverse dependencies using the
## defaults (all repositories in getOption("repos"), all "strong"
## reverse dependencies, no recursive reverse dependencies):
check_packages_in_dir(dir, reverse = list())
## Check packages in dir with their reverse dependencies from CRAN,
## using all strong reverse dependencies and reverse suggests:
check_packages_in_dir(dir,
reverse = list(repos = getOption("repos")["CRAN"],
which = "most"))
## Check packages in dir with their reverse dependencies from CRAN,
## using '--as-cran' for the former but not the latter:
check_packages_in_dir(dir,
check_args = c("--as-cran", ""),
reverse = list(repos = getOption("repos")["CRAN"]))
## End(Not run)
```
`find_gs_cmd` Find a GhostScript Executable
--------------------------------------------
### Description
Find a GhostScript executable in a cross-platform way.
### Usage
```
find_gs_cmd(gs_cmd = "")
```
### Arguments
| | |
| --- | --- |
| `gs_cmd` | The name, full or partial path of a GhostScript executable. |
### Details
The details differ by platform.
On a Unix-alike, the GhostScript executable is usually called `gs`. The name (and possibly path) of the command is taken first from argument `gs_cmd`, then from the environment variable R\_GSCMD, and finally defaults to `gs`. This is then looked for on the system path and the value returned if a match is found.
On Windows, the name of the command is taken from argument `gs_cmd` then from the environment variables R\_GSCMD and GSC. If neither of those produces a suitable command name, `gswin64c` and `gswin32c` are tried in turn. In all cases the command is looked for on the system PATH.
Note that on Windows (and some other OSes) there are separate GhostScript executables to display Postscript/PDF files and to manipulate them: this function looks for the latter.
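A small added sketch of typical use, reporting whether an executable was found:

```
gs <- tools::find_gs_cmd()
if (nzchar(gs)) gs else "no GhostScript executable found"
```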
### Value
A character string giving the full path to a GhostScript executable if one was found, otherwise an empty string.
### Examples
```
## Not run:
## Suppose a Solaris system has GhostScript 9.00 on the path and
## 9.07 in /opt/csw/bin. Then one might set
Sys.setenv(R_GSCMD = "/opt/csw/bin/gs")
## End(Not run)
```
`QC` QC Checks for R Code and/or Documentation
-----------------------------------------------
### Description
Functions for performing various quality control (QC) checks on R code and documentation, notably on R packages.
### Usage
```
checkDocFiles (package, dir, lib.loc = NULL, chkInternal = FALSE)
checkDocStyle (package, dir, lib.loc = NULL)
checkReplaceFuns(package, dir, lib.loc = NULL)
checkS3methods (package, dir, lib.loc = NULL)
checkRdContents (package, dir, lib.loc = NULL, chkInternal = FALSE)
langElts
nonS3methods(package)
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. |
| `dir` | a character string specifying the path to a package's root source (or *installed* in some cases) directory. This should contain the subdirectories ‘R’ (for R code) and ‘man’ with **R** documentation sources (in Rd format). Only used if `package` is not given. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
| `chkInternal` | logical indicating if Rd files marked with keyword `internal` should be checked as well. |
### Details
`checkDocFiles` checks, for all Rd files in a package, whether all arguments shown in the usage sections of the Rd file are documented in its arguments section. It also reports duplicated entries in the arguments section, and ‘over-documented’ arguments which are given in the arguments section but not in the usage. Note that the match is for the usage section and not a possibly existing synopsis section, as the usage is what gets displayed.
`checkDocStyle` investigates how (S3) methods are shown in the usages of the Rd files in a package. It reports the methods shown by their full name rather than using the Rd `\method` markup for indicating S3 methods. Earlier versions of **R** also reported about methods shown along with their generic, which typically caused problems for the documentation of the primary argument in the generic and its methods. With `\method` now being expanded in a way that class information is preserved, joint documentation is no longer necessarily a problem. (The corresponding information is still contained in the object returned by `checkDocStyle`.)
`checkReplaceFuns` checks whether replacement functions or S3/S4 replacement methods in the package R code have their final argument named `value`.
`checkS3methods` checks whether all S3 methods defined in the package R code have all arguments of the corresponding generic, with positional arguments of the generics in the same positions for the method. As an exception, the first argument of a formula method *may* be called `formula` even if this is not the name used by the generic. The rules when `...` is involved are subtle: see the source code. Functions recognized as S3 generics are those with a call to `UseMethod` in their body, internal S3 generics (see [InternalMethods](../../base/html/internalmethods)), and S3 group generics (see `[Math](../../base/html/groupgeneric)`). Possible dispatch under a different name is not taken into account. The generics are sought first in the given package, then in the base package and (currently) the packages graphics, stats, and utils added in R 1.9.0 by splitting the former base, and, if an installed package is tested, also in the loaded namespaces/packages listed in the package's ‘DESCRIPTION’ Depends field.
`checkRdContents()` checks Rd content, e.g., whether arguments of functions in the usage section have non empty descriptions.
`nonS3methods(package)` returns a `[character](../../base/html/character)` vector with the names of the functions in `package` which ‘look’ like S3 methods, but are not. Using `package = NULL` returns all known examples.
`langElts` is a character vector of names of “language elements” of **R**. These are implemented as “very primitive” functions (no argument list; `[print](../../base/html/print)()`ing as `.Primitive("<name>")`).
If using an installed package, the checks needing access to all **R** objects of the package will load the package (unless it is the base package), after possibly detaching an already loaded version of the package.
### Value
The functions return objects of class the same as the respective function names containing the information about problems detected. There are `print` methods for nicely displaying the information contained in such objects.
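As an added illustration (this page itself has no examples), some of the checks can be run on an installed package; ‘stats’ is chosen arbitrarily:

```
library(tools)
checkS3methods("stats")      # do S3 methods match the arguments of their generics?
checkReplaceFuns("stats")    # do replacement functions end with argument 'value'?
checkDocFiles("stats")       # are all usage arguments documented?
```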
`checkRd` Check an Rd Object
-----------------------------
### Description
Check a help file or the output of the `[parse\_Rd](parse_rd)` function.
### Usage
```
checkRd(Rd, defines = .Platform$OS.type, stages = "render",
unknownOK = TRUE, listOK = TRUE, ..., def_enc = FALSE)
```
### Arguments
| | |
| --- | --- |
| `Rd` | a filename or `Rd` object to use as input. |
| `defines` | string(s) to use in `#ifdef` tests. |
| `stages` | at which stage (`"build"`, `"install"`, or `"render"`) should `\Sexpr` macros be executed? See the notes below. |
| `unknownOK` | unrecognized macros are treated as errors if `FALSE`, otherwise warnings. |
| `listOK` | unnecessary non-empty braces (e.g., around text, not as an argument) are treated as errors if `FALSE`, otherwise warnings. |
| `...` | additional parameters to pass to `[parse\_Rd](parse_rd)` when `Rd` is a filename. One that is often useful is `encoding`. |
| `def_enc` | logical: has the package declared an encoding, so tests for non-ASCII text are suppressed? |
### Details
`checkRd` performs consistency checks on an Rd file, confirming that required sections are present, etc.
It accepts a filename for an Rd file, and will use `[parse\_Rd](parse_rd)` to parse it before applying the checks. If so, warnings from `parse_Rd` are collected, together with those from the internal function `prepare_Rd`, which does the `#ifdef` and `\Sexpr` processing, drops sections that would not be rendered or are duplicated (and should not be) and removes empty sections.
An Rd object is passed through `prepare_Rd`, but it may already have been (and installed Rd objects have).
Warnings are given a ‘level’: those from `prepare_Rd` have level 0. These include
* All text must be in a section
* Only one tag name section is allowed: the first will be used
* Section name is unrecognized and will be dropped
* Dropping empty section name
`checkRd` itself can show
| Level | Warning |
| --- | --- |
| 7 | Tag tag name not recognized |
| 7 | `\tabular` format must be simple text |
| 7 | Unrecognized `\tabular` format: ... |
| 7 | Only n columns allowed in this table |
| 7 | Must have a tag name |
| 7 | Only one tag name is allowed |
| 7 | Tag tag name must not be empty |
| 7 | `\docType` must be plain text |
| 5 | Tag `\method` is only valid in `\usage` |
| 5 | Tag `\dontrun` is only valid in `\examples` |
| 5 | Tag tag name is invalid in a block name block |
| 5 | Title of `\section` must be non-empty plain text |
| 5 | `\title` content must be plain text |
| 3 | Empty section tag name |
| -1 | Non-ASCII contents without declared encoding |
| -1 | Non-ASCII contents in second part of `\enc` |
| -3 | Tag `\ldots` is not valid in a code block |
| -3 | Apparent non-ASCII contents without declared encoding |
| -3 | Apparent non-ASCII contents in second part of `\enc` |
| -3 | Unnecessary braces at ... |
| -3 | `\method` not valid outside a code block |
and variations with `\method` replaced by `\S3method` or `\S4method`.
Note that both `prepare_Rd` and `checkRd` have tests for an empty section: that in `checkRd` is stricter (essentially that nothing is output).
### Value
This may fail through an **R** error, but otherwise warnings are collected and returned as an object of class `"checkRd"`, a character vector of messages. This class has a `print` method which only prints unique messages, and has argument `minlevel` that can be used to select only more serious messages. (This is set to `-1` in `R CMD check`.)
Possible fatal errors are those from running the parser (e.g., a non-existent file, unclosed quoted string, non-ASCII input without a specified encoding) or from `prepare_Rd` (multiple `\Rdversion` declarations, invalid `\encoding` or `\docType` or `\name` sections, and missing or duplicate `\name` or `\title` sections).
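An added sketch of checking an installed Rd object, using `Rd_db` from this package (the particular file picked is arbitrary):

```
library(tools)
db <- Rd_db("tools")
chk <- checkRd(db[[1]])      # check the first installed Rd object
print(chk, minlevel = -1)
```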
### Author(s)
Duncan Murdoch, Brian Ripley
### See Also
`[parse\_Rd](parse_rd)`, `[Rd2HTML](rd2html)`.
`Rdutils` Rd Utilities
-----------------------
### Description
Utilities for computing on the information in Rd objects.
### Usage
```
Rd_db(package, dir, lib.loc = NULL, stages = "build")
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. |
| `dir` | a character string specifying the path to a package's root source directory. This should contain the subdirectory ‘man’ with **R** documentation sources (in Rd format). Only used if `package` is not given. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
| `stages` | if `dir` is specified and the database is being built from source, which stages of `\Sexpr` processing should be processed? |
### Details
`Rd_db` builds a simple database of all Rd objects in a package, as a list of the results of running `[parse\_Rd](parse_rd)` on the Rd source files in the package and processing platform conditionals and some `\Sexpr` macros.
### See Also
`[parse\_Rd](parse_rd)`
### Examples
```
## Build the Rd db for the (installed) base package.
db <- Rd_db("base")
## Keyword metadata per Rd object.
keywords <- lapply(db, tools:::.Rd_get_metadata, "keyword")
## Tabulate the keyword entries.
kw_table <- sort(table(unlist(keywords)))
## The 5 most frequent ones:
rev(kw_table)[1 : 5]
## The "most informative" ones:
kw_table[kw_table == 1]
## Concept metadata per Rd file.
concepts <- lapply(db, tools:::.Rd_get_metadata, "concept")
## How many files already have \concept metadata?
sum(sapply(concepts, length) > 0)
## How many concept entries altogether?
length(unlist(concepts))
```
r None
`update_pkg_po` Prepare Translations for a Package
---------------------------------------------------
### Description
Prepare the ‘po’ directory of a package and compile and install the translations.
### Usage
```
update_pkg_po(pkgdir, pkg = NULL, version = NULL, copyright, bugs)
```
### Arguments
| | |
| --- | --- |
| `pkgdir` | The path to the package directory. |
| `pkg` | The package name: if `NULL` it is read from the package's ‘DESCRIPTION’ file. |
| `version` | The package version: if `NULL` it is read from the package's ‘DESCRIPTION’ file. |
| `copyright, bugs` | optional character strings for the Copyright and Report-Msgid-Bugs-To details in the template files. |
### Details
This performs a series of steps to prepare or update messages in the package.
* If the package sources do not already have a ‘po’ directory, one is created.
* `[xgettext2pot](xgettext)` is called to create/update a file ‘po/R-pkgname.pot’ containing the translatable messages in the package.
* All existing files in directory `po` with names ‘R-lang.po’ are updated from ‘R-pkgname.pot’, `[checkPoFile](checkpofiles)` is called on the updated file, and if there are no problems the file is compiled and installed under ‘inst/po’.
* In a UTF-8 locale, a ‘translation’ ‘[email protected]’ is created with UTF-8 directional quotes, compiled and installed under ‘inst/po’.
* The remaining steps are done only if file ‘po/pkgname.pot’ already exists. The ‘src/\*.{c,cc,cpp,m,mm}’ files in the package are examined to create a file ‘po/pkgname.pot’ containing the translatable messages in the C/C++ files. If there is a `src/windows` directory, files within it are also examined.
* All existing files in directory `po` with names ‘lang.po’ are updated from ‘pkgname.pot’, `[checkPoFile](checkpofiles)` is called on the updated file, and if there are no problems the file is compiled and installed under ‘inst/po’.
* In a UTF-8 locale, a ‘translation’ ‘[email protected]’ is created with UTF-8 directional quotes, compiled and installed under ‘inst/po’.
Note that C/C++ messages are not automatically prepared for translation as they need to be explicitly marked for translation in the source files. Once that has been done, create an empty file ‘po/pkgname.pot’ in the package sources and run this function again.
`pkg = "base"` is special (and for use by **R** developers only): the C files are not in the package directory but in the main sources.
### System requirements
This function requires the following tools from the GNU `gettext-tools`: `xgettext`, `msgmerge`, `msgfmt`, `msginit` and `msgconv`. These are part of most Linux distributions and easily compiled from the sources on Unix-alikes (including macOS). Pre-compiled versions for Windows are available in <https://www.stats.ox.ac.uk/pub/Rtools/goodies/gettext-tools.zip>.
It will probably not work correctly for `en@quot` translations except in a UTF-8 locale, so these are skipped elsewhere.
### See Also
`[xgettext2pot](xgettext)`.
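A typical call, sketched below with a placeholder path, simply names the package source directory; the GNU gettext tools must be installed for it to work.

```
## Not run:
## 'path/to/mypkg' is a placeholder for a package source directory
update_pkg_po("path/to/mypkg")
## End(Not run)
```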
r None
`encoded` Translate non-ASCII Text to LaTeX Escapes
----------------------------------------------------
### Description
Translate non-ASCII characters in text to LaTeX escape sequences.
### Usage
```
encoded_text_to_latex(x,
encoding = c("latin1", "latin2", "latin9",
"UTF-8", "utf8"))
```
### Arguments
| | |
| --- | --- |
| `x` | a character vector. |
| `encoding` | the encoding to be assumed. `"latin9"` is officially ISO-8859-15 or Latin-9, but known as latin9 to LaTeX's `inputenc` package. |
### Details
Non-ASCII characters in `x` are replaced by an appropriate LaTeX escape sequence, or ? if there is no appropriate sequence.
Even if there is an appropriate sequence, it may not be supported by the font in use. Hyphen is mapped to \-.
### Value
A character vector of the same length as `x`.
### See Also
`[iconv](../../base/html/iconv)`
### Examples
```
x <- "fa\xE7ile"
encoded_text_to_latex(x, "latin1")
## Not run:
## create a tex file to show the upper half of 8-bit charsets
x <- rawToChar(as.raw(160:255), multiple = TRUE)
(x <- matrix(x, ncol = 16, byrow = TRUE))
xx <- x
xx[] <- encoded_text_to_latex(x, "latin1") # or latin2 or latin9
xx <- apply(xx, 1, paste, collapse = "&")
con <- file("test-encoding.tex", "w")
header <- c(
"\\documentclass{article}",
"\\usepackage[T1]{fontenc}",
"\\usepackage{Rd}",
"\\begin{document}",
"\\HeaderA{test}{}{test}",
"\\begin{Details}\relax",
"\\Tabular{cccccccccccccccc}{")
trailer <- c("}", "\\end{Details}", "\\end{document}")
writeLines(header, con)
writeLines(paste0(xx, "\\\\"), con)
writeLines(trailer, con)
close(con)
## and some UTF-8 chars
x <- intToUtf8(as.integer(
c(160:383,0x0192,0x02C6,0x02C7,0x02CA,0x02D8,
0x02D9, 0x02DD, 0x200C, 0x2018, 0x2019, 0x201C,
0x201D, 0x2020, 0x2022, 0x2026, 0x20AC)),
multiple = TRUE)
x <- matrix(x, ncol = 16, byrow = TRUE)
xx <- x
xx[] <- encoded_text_to_latex(x, "UTF-8")
xx <- apply(xx, 1, paste, collapse = "&")
con <- file("test-utf8.tex", "w")
writeLines(header, con)
writeLines(paste(xx, "\\\\", sep = ""), con)
writeLines(trailer, con)
close(con)
## End(Not run)
```
r None
`startDynamicHelp` Start the Dynamic HTML Help System
------------------------------------------------------
### Description
This function starts the internal help server, so that HTML help pages are rendered when requested.
### Usage
```
startDynamicHelp(start = TRUE)
```
### Arguments
| | |
| --- | --- |
| `start` | logical: whether to start or shut down the dynamic help system. If `NA`, the server is started if not already running. |
### Details
This function starts the internal HTTP server, which runs on the loopback interface (`127.0.0.1`). If `options("help.ports")` is set to a vector of non-zero integer values, `startDynamicHelp` will try those ports in order; otherwise, it tries up to 10 random ports to find one not in use. It can be disabled by setting the environment variable R\_DISABLE\_HTTPD to a non-empty value or `options("help.ports")` to `0`.
`startDynamicHelp` is called by functions that need to use the server, so would rarely be called directly by a user.
Note that `options(help_type = "html")` must be set to actually make use of HTML help, although it might be the default for an **R** installation.
If the server cannot be started or is disabled, `[help.start](../../utils/html/help.start)` will be unavailable and requests for HTML help will give text help (with a warning).
The browser in use does need to be able to connect to the loopback interface: occasionally it is set to use a proxy for HTTP on all interfaces, which will not work – the solution is to add an exception for `127.0.0.1`.
### Value
The chosen port number is returned invisibly (which will be `0` if the server has been stopped).
### See Also
`[help.start](../../utils/html/help.start)` and `[help](../../utils/html/help)(help_type = "html")` will attempt to start the HTTP server if required.
`[Rd2HTML](rd2html)` is used to render the package help pages.
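Direct use is rarely needed, but as a sketch, passing `NA` starts the server only if it is not already running and returns the port in use.

```
## Not run:
port <- startDynamicHelp(NA)
port   # 0 means the server is disabled or could not be started
## End(Not run)
```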
r None
`Rdiff` Difference R Output Files
----------------------------------
### Description
Given two **R** output files, compute differences ignoring headers, footers and some other differences.
### Usage
```
Rdiff(from, to, useDiff = FALSE, forEx = FALSE,
nullPointers = TRUE, Log = FALSE)
```
### Arguments
| | |
| --- | --- |
| `from, to` | filepaths to be compared |
| `useDiff` | should `diff` always be used to compare results? |
| `forEx` | logical: extra pruning for ‘-Ex.Rout’ files to exclude the header. |
| `nullPointers` | logical: should the displayed addresses of pointers be set to `0x00000000` before comparison? |
| `Log` | logical: should the returned value include a log of differences found? |
### Details
The **R** startup banner and any timing information from `R CMD BATCH` are removed from both files, together with lines about loading packages. UTF-8 fancy quotes (see `[sQuote](../../base/html/squote)`) and on Windows, Windows' so-called ‘smart quotes’, are mapped to a simple quote. Addresses of environments, compiled bytecode and other exotic types expressed as hex addresses (e.g., `<environment: 0x12345678>`) are mapped to `0x00000000`. The files are then compared line-by-line. If there are the same number of lines and `useDiff` is false, a simple `diff -b`-like display of differences is printed (which ignores trailing spaces and differences in numbers of consecutive spaces), otherwise `diff -bw` is called on the edited files. (This tries to ignore all differences in whitespace: note that flag -w is not required by POSIX but is supported by GNU, Solaris and FreeBSD versions.)
This can compare uncompressed PDF files, ignoring differences in creation and modification dates.
Mainly for use in examples, text from marker > ## IGNORE\_RDIFF\_BEGIN up to (but not including) > ## IGNORE\_RDIFF\_END is ignored.
### Value
If `Log` is true, a list with components `status` (see below) and `out`, a character vector of descriptions of differences, possibly of zero length.
Otherwise, a status indicator, `0L` if and only if no differences were found.
### See Also
The shell script run as `R CMD Rdiff`.
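A sketch of direct use, with placeholder file names; `Log = TRUE` also returns the textual differences.

```
## Not run:
## 'test.Rout.save' and 'test.Rout' are placeholders for real output files
res <- Rdiff("test.Rout.save", "test.Rout", Log = TRUE)
res$status   # 0L if and only if no differences were found
res$out      # descriptions of any differences
## End(Not run)
```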
r None
`delimMatch` Delimited Pattern Matching
----------------------------------------
### Description
Match delimited substrings in a character vector, with proper nesting.
### Usage
```
delimMatch(x, delim = c("{", "}"), syntax = "Rd")
```
### Arguments
| | |
| --- | --- |
| `x` | a character vector. |
| `delim` | a character vector of length 2 giving the start and end delimiters. Future versions might allow for arbitrary regular expressions. |
| `syntax` | currently, always the string `"Rd"` indicating Rd syntax (i.e., % starts a comment extending till the end of the line, and \ escapes). Future versions might know about other syntax, perhaps via ‘syntax tables’ allowing to flexibly specify comment, escape, and quote characters. |
### Value
An integer vector of the same length as `x` giving the starting position (in characters) of the first match, or *-1* if there is none, with attribute `"match.length"` giving the length (in characters) of the matched text (or *-1* for no match).
### See Also
`[regexpr](../../base/html/grep)` for ‘simple’ pattern matching.
### Examples
```
x <- c("\\value{foo}", "function(bar)")
delimMatch(x)
delimMatch(x, c("(", ")"))
```
r None
`toRd` Generic Function to Convert Object to a Fragment of Rd Code
-------------------------------------------------------------------
### Description
Methods for this function render their associated classes as a fragment of Rd code, which can then be rendered into text, HTML, or LaTeX.
### Usage
```
toRd(obj, ...)
## S3 method for class 'bibentry'
toRd(obj, style = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `obj` | The object to be rendered. |
| `style` | The style to be used in converting a `[bibentry](../../utils/html/bibentry)` object. |
| `...` | Additional arguments used by methods. |
### Details
See `<bibstyle>` for a discussion of styles. The default `style = NULL` value gives the default style.
### Value
Returns a character vector containing a fragment of Rd code that could be parsed and rendered. The default method converts `obj` to mode `character`, then escapes any Rd markup within it. The `bibentry` method converts an object of that class to markup appropriate for use in a bibliography.
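For example, a `bibentry` object (with made-up fields, purely for illustration) can be converted to Rd markup and printed.

```
b <- bibentry(bibtype = "Manual",
              title   = "An Illustrative Manual",
              author  = person("Some", "Author"),
              year    = "2020")
cat(toRd(b), sep = "\n")
```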
r None
`updatePACKAGES` Update Existing PACKAGES Files
------------------------------------------------
### Description
Update an existing repository by reading the `PACKAGES` file, retaining entries which are still valid, removing entries which are no longer valid, and only processing built package tarballs which do not match existing entries.
`update_PACKAGES` can be much faster than `[write\_PACKAGES](writepackages)` for small-moderate changes to large repository indexes, particularly in non-strict mode (see Details).
### Usage
```
update_PACKAGES(dir = ".", fields = NULL, type = c("source",
"mac.binary", "win.binary"), verbose.level = as.integer(dryrun),
latestOnly = TRUE, addFiles = FALSE, rds_compress = "xz",
strict = TRUE, dryrun = FALSE)
```
### Arguments
| | |
| --- | --- |
| `dir` | See `[write\_PACKAGES](writepackages)` |
| `fields` | See `[write\_PACKAGES](writepackages)` |
| `type` | See `[write\_PACKAGES](writepackages)` |
| `verbose.level` | (0, 1, 2) What level of informative messages should be displayed throughout the process. Defaults to 0 if `dryrun` is `FALSE` (the default) and 1 otherwise. See details for more information. |
| `latestOnly` | See `[write\_PACKAGES](writepackages)` |
| `addFiles` | See `[write\_PACKAGES](writepackages)` |
| `rds_compress` | See `[write\_PACKAGES](writepackages)` |
| `strict` | logical. Should 'strict mode' be used when checking existing `PACKAGES` entries? See details. Defaults to `TRUE`. |
| `dryrun` | logical. Should the updates to existing `PACKAGES` files be computed but NOT applied? Defaults to `FALSE`. |
### Details
Throughout this section, *package tarball* is defined to mean any archive file in `dir` whose name can be interpreted as `<package>_<version>.<ext>` - with `<ext>` the appropriate extension for built packages of type `type` - (or that is pointed to by the `File` field of an existing `PACKAGES` entry). *Novel package tarballs* are those which do not match an existing `PACKAGES` file entry.
`update_PACKAGES` calls directly down to `[write\_PACKAGES](writepackages)` with a warning (and thus all package tarballs will be processed), if any of the following conditions hold:
* `type` is `win.binary` and `strict` is `TRUE` (no MD5 checksums are included in win.binary `PACKAGES` files)
* No `PACKAGES` file exists under `dir`
* A `PACKAGES` file exists under `dir` but is empty
* `fields` is not `NULL` and one or more specified fields are not present in the existing `PACKAGES` file
`update_PACKAGES` avoids (re)processing package tarballs in cases where a `PACKAGES` file entry already exists and appears to remain valid. The logic for detecting still-valid entries is as follows:
Any package tarball which was last modified more recently than the existing `PACKAGES` file is considered novel; existing `PACKAGES` entries appearing to correspond to such tarballs are *always* considered stale and replaced by newly generated ones. Similarly, all `PACKAGES` entries that do not correspond to any package tarball found in `dir` are considered invalid and are excluded from the resulting updated `PACKAGES` files.
When `strict` is `TRUE`, `PACKAGES` entries that match a package tarball (by package name and version) are confirmed via MD5 checksum; only those that pass are retained as valid. All novel package tarballs are fully processed by the standard machinery underlying `[write\_PACKAGES](writepackages)` and the resulting entries are added. Finally, if `latestOnly` is `TRUE`, package-version pruning is performed across the entries.
When `strict` is `FALSE`, package tarballs are assumed to encode correct metadata in their filenames. `PACKAGES` entries which appear to match a package tarball are retained as valid (No MD5 checksum testing occurs). If `latestOnly` is `TRUE`, package-version pruning is performed across the full set of retained entries and novel package tarballs *before* the processing of the novel tarballs, at significant computational and time savings in some situations. After the optional pruning, any relevant novel package tarballs are processed via the standard machinery and added to the set of retained entries.
In both cases, after the above process concludes, entries are sorted alphabetically by the string concatenation of `Package` and `Version`. This should match the entry order `write_PACKAGES` outputs.
The fields within the entries are ordered as follows: canonical fields - i.e., those appearing as columns when `available.packages` is called on a CRAN mirror - appear first in their canonical order, followed by any non-canonical fields.
After entry and field reordering, the final database of `PACKAGES` entries is written to all three `PACKAGES` files, overwriting the existing versions.
When `verbose.level` is `0`, no extra messages are displayed to the user. When it is `1`, detailed information about what is happening is conveyed via messages, but underlying machinery from `[write\_PACKAGES](writepackages)` is invoked with `verbose = FALSE`. Behavior when `verbose.level` is `2` is identical to that for `1`, except that the underlying machinery from `write_PACKAGES` is invoked with `verbose = TRUE`, which will individually list every processed tarball.
### Note
While both strict and non-strict modes can offer speedups when updating small percentages of large repositories, non-strict mode is *much* faster and is recommended in situations where the assumption it makes about tarballs' filenames encoding accurate information is safe.
### Note
Users should expect significantly smaller speedups over `write_PACKAGES` in the `type == "win.binary"` case on at least some operating systems. This is due to `write_PACKAGES` being significantly faster in this context, rather than `update_PACKAGES` being slower.
### Author(s)
Gabriel Becker (adapted from previous, related work by him in the `switchr` package which is copyright Genentech, Inc.)
### See Also
[write\_PACKAGES](writepackages)
### Examples
```
## Not run:
write_PACKAGES("c:/myFolder/myRepository") # on Windows
update_PACKAGES("c:/myFolder/myRepository") # on Windows
write_PACKAGES("/pub/RWin/bin/windows/contrib/2.9",
type = "win.binary") # on Linux
update_PACKAGES("/pub/RWin/bin/windows/contrib/2.9",
type = "win.binary") # on Linux
## End(Not run)
```
r None
`buildVignettes` List and Build Package Vignettes
--------------------------------------------------
### Description
Run `[Sweave](../../utils/html/sweave)` (or other custom weave function) and `[texi2pdf](texi2dvi)` on all vignettes of a package, or list the vignettes.
### Usage
```
buildVignettes(package, dir, lib.loc = NULL, quiet = TRUE,
clean = TRUE, tangle = FALSE, ser_elibs = NULL)
pkgVignettes(package, dir, subdirs = NULL, lib.loc = NULL,
output = FALSE, source = FALSE, check = FALSE)
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. If given, vignette source files are by default looked for in subdirectory ‘doc’. |
| `dir` | a character string specifying the path to a package's root source directory. If given, vignette source files are by default looked for in subdirectory ‘vignettes’. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
| `quiet` | logical. Weave and run `[texi2pdf](texi2dvi)` in quiet mode. |
| `clean` | Remove all files generated by the build, even if there were copies there before. |
| `tangle` | logical. Do tangling as well as weaving. |
| `ser_elibs` | For use from `R CMD check`. |
| `subdirs` | a character vector of subdirectories of `dir` in which to look for vignettes. The first which exists is used. Defaults to `"doc"` if `package` is supplied, otherwise `"vignettes"`. |
| `output` | logical indicating if the output filenames for each vignette should be returned (in component `outputs`). |
| `source` | logical indicating if the *tangled* output filenames for each vignette should be returned (in component `sources`). |
| `check` | logical. If `TRUE`, check whether all files that have vignette-like filenames have an identifiable vignette engine. This may be a false positive if a file is not a vignette but has a filename matching a pattern defined by one of the vignette engines. |
### Details
`buildVignettes` is used by `R CMD build` and `R CMD check` to (re-)build vignette outputs from their sources.
As from **R** 3.4.1, both of these functions ignore files that are listed in the ‘.Rbuildignore’ file in `dir`.
### Value
`buildVignettes` is called for its side effect of creating the outputs of all vignettes, and if `tangle = TRUE`, extracting the **R** code.
`pkgVignettes` returns an object of class `"pkgVignettes"` if a vignette directory is found, otherwise `NULL`.
### Examples
```
gVigns <- pkgVignettes("grid")
str(gVigns)
```
r None
`buildVignette` Build One Vignette
-----------------------------------
### Description
Run `[Sweave](../../utils/html/sweave)` (or other custom weave function), `[texi2pdf](texi2dvi)`, and/or `[Stangle](../../utils/html/sweave)` (or other custom tangle function) on one vignette.
This is the workhorse of `R CMD Sweave`.
### Usage
```
buildVignette(file, dir = ".", weave = TRUE, latex = TRUE, tangle = TRUE,
quiet = TRUE, clean = TRUE, keep = character(),
engine = NULL, buildPkg = NULL, encoding, ...)
```
### Arguments
| | |
| --- | --- |
| `file` | character; the vignette source file. |
| `dir` | character; the working directory in which the intermediate and output files will be produced. |
| `weave` | logical; should weave be run? |
| `latex` | logical; should [texi2pdf](texi2dvi) be run if weaving produces a ‘.tex’ file? |
| `tangle` | logical; should tangle be run? |
| `quiet` | logical; run in quiet mode? |
| `clean` | logical; whether to remove some newly created, often intermediate, files. See details below. |
| `keep` | a list of file names to keep in any case when cleaning. Note that “target” files are kept anyway. |
| `engine` | `NULL` or character; name of vignette engine to use. Overrides any `\VignetteEngine{}` markup in the vignette. |
| `buildPkg` | `NULL` or a character vector; optional packages in which to find the vignette engine. |
| `encoding` | the encoding to assume for the file. If not specified, it will be read if possible from the file's contents. Note that if the vignette is part of a package, `[buildVignettes](buildvignettes)` reads the package's encoding from the ‘DESCRIPTION’ file but this function does not. |
| `...` | Additional arguments passed to weave and tangle. |
### Details
This function determines the vignette engine for the vignette (default `utils::Sweave`), then weaves and/or tangles the vignette using that engine. Finally, if `clean` is `TRUE`, newly created intermediate files (non “targets”, where these depend on the engine, etc, and not any in `keep`) will be deleted. If `clean` is `NA`, and `weave` is true, newly created intermediate output files (e.g., ‘.tex’) will not be deleted even if a ‘.pdf’ file has been produced from them.
If `buildPkg` is specified, those packages will be loaded before the vignette is processed and will be used as the default packages in the search for a vignette engine, but an explicitly specified package in the vignette source (e.g., using `\VignetteEngine{utils::Sweave}` to specify the `Sweave` engine in the utils package) will override it. In contrast, if the `engine` argument is given, it will override the vignette source.
### Value
A character vector naming the files that have been produced.
### Author(s)
Henrik Bengtsson and Duncan Murdoch
### See Also
`[buildVignettes](buildvignettes)` for building all vignettes in a package.
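A sketch, with a placeholder vignette file name, of weaving a single vignette in the current directory without tangling.

```
## Not run:
## 'myvignette.Rnw' is a placeholder for a vignette source file
buildVignette("myvignette.Rnw", tangle = FALSE)
## End(Not run)
```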
r None
`parse_Rd` Parse an Rd File
----------------------------
### Description
This function reads an R documentation (Rd) file and parses it, for processing by other functions.
### Usage
```
parse_Rd(file, srcfile = NULL, encoding = "unknown",
verbose = FALSE, fragment = FALSE, warningCalls = TRUE,
macros = file.path(R.home("share"), "Rd", "macros", "system.Rd"),
permissive = FALSE)
## S3 method for class 'Rd'
print(x, deparse = FALSE, ...)
## S3 method for class 'Rd'
as.character(x, deparse = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `file` | A filename or text-mode connection. At present filenames work best. |
| `srcfile` | `NULL`, or a `"srcfile"` object. See the ‘Details’ section. |
| `encoding` | Encoding to be assumed for input strings. |
| `verbose` | Logical indicating whether detailed parsing information should be printed. |
| `fragment` | Logical indicating whether file represents a complete Rd file, or a fragment. |
| `warningCalls` | Logical: should parser warnings include the call? |
| `macros` | Filename or environment from which to load additional macros, or a logical value. See the Details below. |
| `permissive` | Logical indicating that unrecognized macros should be treated as text with no warning. |
| `x` | An object of class Rd. |
| `deparse` | If `TRUE`, attempt to reinstate the escape characters so that the resulting characters will parse to the same object. |
| `...` | Further arguments to be passed to or from other methods. |
### Details
This function parses ‘Rd’ files according to the specification given in <https://developer.r-project.org/parseRd.pdf>.
It generates a warning for each parse error and attempts to continue parsing. In order to continue, it is generally necessary to drop some parts of the file, so such warnings should not be ignored.
Files without a marked encoding are by default assumed to be in the native encoding. An alternate default can be set using the `encoding` argument. All text in files is translated to the UTF-8 encoding in the parsed object.
As from **R** version 3.2.0, user-defined macros may be given in a separate file using `\newcommand` or `\renewcommand`. An environment may also be given: it would be produced by `[loadRdMacros](loadrdmacros)`, `[loadPkgRdMacros](loadrdmacros)`, or by a previous call to `parse_Rd`. If a logical value is given, only the default built-in macros will be used; `FALSE` indicates that no `"macros"` attribute will be returned with the result.
The `permissive` argument allows text to be parsed that is not completely in Rd format. Typically it would be LaTeX code, used in an Rd fragment, e.g. in a `[bibentry](../../utils/html/bibentry)`. With `permissive = TRUE`, this will be passed through as plain text. Since `parse_Rd` doesn't know how many arguments belong in LaTeX macros, it will guess based on the presence of braces after the macro; this is not infallible.
### Value
`parse_Rd` returns an object of class `"Rd"`. The internal format of this object is subject to change. The `as.character()` and `print()` methods defined for the class return character vectors and print them, respectively.
Unless `macros = FALSE`, the object will have an attribute named `"macros"`, which is an environment containing the macros defined in `file`, in a format that can be used for further `parse_Rd` calls in the same session. It is not guaranteed to work if saved to a file and reloaded in a different session.
### Author(s)
Duncan Murdoch
### References
<https://developer.r-project.org/parseRd.pdf>
### See Also
`[Rd2HTML](rd2html)` for the converters that use the output of `parse_Rd()`.
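As a small sketch, a minimal Rd file written to a temporary file can be parsed and then deparsed again.

```
rd_src <- c("\\name{example}",
            "\\title{An Example}",
            "\\description{A tiny Rd file.}")
tf <- tempfile(fileext = ".Rd")
writeLines(rd_src, tf)
rd <- parse_Rd(tf)
rd
## reconstruct something close to the original source
cat(as.character(rd, deparse = TRUE), sep = "")
```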
r None
`RdTextFilter` Select Text in an Rd File
-----------------------------------------
### Description
This function blanks out all non-text in an Rd file, for spell checking or other uses.
### Usage
```
RdTextFilter(ifile, encoding = "unknown", keepSpacing = TRUE,
drop = character(), keep = character(),
macros = file.path(R.home("share"), "Rd", "macros", "system.Rd"))
```
### Arguments
| | |
| --- | --- |
| `ifile` | An input file specified as a filename or connection, or an `"Rd"` object from `[parse\_Rd](parse_rd)`. |
| `encoding` | An encoding name to pass to `[parse\_Rd](parse_rd)`. |
| `keepSpacing` | Whether to try to leave the text in the same lines and columns as in the original file. |
| `drop` | Additional sections of the Rd to drop. |
| `keep` | Sections of the Rd file to keep. |
| `macros` | Macro definitions to assume when parsing. See `[parse\_Rd](parse_rd)`. |
### Details
This function parses the Rd file, then walks through it, element by element. Items with tag `"TEXT"` are kept in the same position as they appeared in the original file, while other parts of the file are replaced with blanks, so a spell checker such as `[aspell](../../utils/html/aspell)` can check only the text and report the position in the original file. (If `keepSpacing` is `FALSE`, blank filling will not occur, and text will not be output in its original location.)
By default, the tags `\S3method`, `\S4method`, `\command`, `\docType`, `\email`, `\encoding`, `\file`, `\keyword`, `\link`, `\linkS4class`, `\method`, `\pkg`, and `\var` are skipped. Additional tags can be skipped by listing them in the `drop` argument; listing tags in the `keep` argument will stop them from being skipped. It is also possible to `keep` any of the `c("RCODE", "COMMENT", "VERB")` tags, which correspond to R-like code, comments, and verbatim text respectively, or to drop `"TEXT"`.
### Value
A character vector which if written to a file, one element per line, would duplicate the text elements of the original Rd file.
### Note
The filter attempts to merge text elements into single words when markup in the Rd file is used to highlight just the start of a word.
### Author(s)
Duncan Murdoch
### See Also
`[aspell](../../utils/html/aspell)`, for which this is an acceptable `filter`.
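A sketch of that use, with a placeholder file path.

```
## Not run:
## 'man/foo.Rd' is a placeholder path to an Rd source file
aspell("man/foo.Rd", filter = RdTextFilter)
## End(Not run)
```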
r None
`HTMLheader` Generate a Standard HTML Header for R Help
--------------------------------------------------------
### Description
This function generates the standard HTML header used on R help pages.
### Usage
```
HTMLheader(title = "R", logo = TRUE, up = NULL,
top = file.path(Rhome, "doc/html/index.html"),
Rhome = "",
css = file.path(Rhome, "doc/html/R.css"),
headerTitle = paste("R:", title),
outputEncoding = "UTF-8")
```
### Arguments
| | |
| --- | --- |
| `title` | The title to display and use in the HTML headers. Should have had any HTML escaping already done. |
| `logo` | Whether to display the **R** logo after the title. |
| `up` | Which page (if any) to link to on the “up” button. |
| `top` | Which page (if any) to link to on the “top” button. |
| `Rhome` | A **relative** path to the R home directory. See the ‘Details’. |
| `css` | The relative URL for the Cascading Style Sheet. |
| `headerTitle` | The title used in the headers. |
| `outputEncoding` | The declared encoding for the whole page. |
### Details
The `up` and `top` links should be relative to the current page. The `Rhome` path default works with dynamic help; for static help, a relative path (e.g., ‘../..’) to it should be used.
### Value
A character vector containing the lines of an HTML header which can be used to start a page in the R help system.
### Examples
```
cat(HTMLheader("This is a sample header"), sep="\n")
```
r None
`charsets` Conversion Tables between Character Sets
----------------------------------------------------
### Description
`charset_to_Unicode` is a matrix of Unicode code points with columns for the common 8-bit encodings.
`Adobe_glyphs` is a data frame which gives Adobe glyph names for Unicode code points. It has two character columns, `"adobe"` and `"unicode"` (a 4-digit hex representation).
### Usage
```
charset_to_Unicode
Adobe_glyphs
```
### Details
`charset_to_Unicode` is an integer matrix of class `c("[noquote](../../base/html/noquote)", "[hexmode](../../base/html/hexmode)")` so prints in hexadecimal. The mappings are those used by `libiconv`: there are differences in the way quotes and minus/hyphen are mapped between sources (and the postscript encoding files use a different mapping).
`Adobe_glyphs` includes all the Adobe glyph names which correspond to single Unicode characters. It is sorted by Unicode code point and within a point alphabetically on the glyph (there can be more than one name for a Unicode code point). The data are in the file ‘[R\_HOME](../../base/html/rhome)/share/encodings/Adobe\_glyphlist’.
### Examples
```
## find Adobe names for ISOLatin2 chars.
latin2 <- charset_to_Unicode[, "ISOLatin2"]
aUnicode <- as.hexmode(paste0("0x", Adobe_glyphs$unicode))
keep <- aUnicode %in% latin2
aUnicode <- aUnicode[keep]
aAdobe <- Adobe_glyphs[keep, 1]
## first match
aLatin2 <- aAdobe[match(latin2, aUnicode)]
## all matches
bLatin2 <- lapply(1:256, function(x) aAdobe[aUnicode == latin2[x]])
format(bLatin2, justify = "none")
```
r None
`parseLatex` Experimental Functions to Work with LaTeX Code
------------------------------------------------------------
### Description
The `parseLatex` function parses LaTeX source, producing a structured object; `deparseLatex` reverses the process. The `latexToUtf8` function takes a LaTeX object, and processes a number of different macros to convert them into the corresponding UTF-8 characters.
### Usage
```
parseLatex(text, filename = deparse1(substitute(text)),
verbose = FALSE,
verbatim = c("verbatim", "verbatim*",
"Sinput", "Soutput"))
deparseLatex(x, dropBraces = FALSE)
latexToUtf8(x)
```
### Arguments
| | |
| --- | --- |
| `text` | A character vector containing LaTeX source code. |
| `filename` | A filename to use in syntax error messages. |
| `verbose` | If `TRUE`, print debug error messages. |
| `verbatim` | A character vector containing the names of LaTeX environments holding verbatim text. |
| `x` | A `"LaTeX"` object. |
| `dropBraces` | Drop unnecessary braces when displaying a `"LaTeX"` object. |
### Details
The parser does not recognize all legal LaTeX code, only relatively simple examples. It does not associate arguments with macros; that needs to be done after parsing, with knowledge of the definitions of each macro. The main intention for this function is to process simple LaTeX code used in bibliographic references, not fully general LaTeX documents.
Verbatim text is allowed in two forms: the `\verb` macro (with single character delimiters), and environments whose names are listed in the `verbatim` argument.
### Value
The `parseLatex()` function returns a recursive object of class `"LaTeX"`. Each of the entries in this object will have a `"latex_tag"` attribute identifying its syntactic role.
The `deparseLatex()` function returns a single element character vector, possibly containing embedded newlines.
The `latexToUtf8()` function returns a modified version of the `"LaTeX"` object that was passed to it.
### Author(s)
Duncan Murdoch
### Examples
```
latex <- parseLatex("fa\\c{c}ile")
deparseLatex(latexToUtf8(latex))
```
r None
`Rdindex` Generate Index from Rd Files
---------------------------------------
### Description
Print a 2-column index table with names and titles from given R documentation files to a given output file or connection. The titles are nicely formatted between two column positions (typically 25 and 72, respectively).
### Usage
```
Rdindex(RdFiles, outFile = "", type = NULL,
width = 0.9 * getOption("width"), indent = NULL)
```
### Arguments
| | |
| --- | --- |
| `RdFiles` | a character vector specifying the Rd files to be used for creating the index, either by giving the paths to the files, or the path to a single directory with the sources of a package. |
| `outFile` | a connection, or a character string naming the output file to print to. `""` (the default) indicates output to the console. |
| `type` | a character string giving the documentation type of the Rd files to be included in the index, or `NULL` (the default). The type of an Rd file is typically specified via the `\docType` tag; if `type` is `"data"`, Rd files whose *only* keyword is `datasets` are included as well. |
| `width` | a positive integer giving the target column for wrapping lines in the output. |
| `indent` | a positive integer specifying the indentation of the second column. Must not be greater than `width/2`, and defaults to `width/3`. |
### Details
If a name is not a valid alias, the first alias (or the empty string if there is none) is used instead.
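A sketch with a placeholder path to a package's source directory; the index is printed to the console.

```
## Not run:
## 'path/to/pkg' is a placeholder for a package source directory
## containing Rd files under 'man'
Rdindex("path/to/pkg")
## End(Not run)
```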
r None
`testInstalledPackage` Test Installed Packages
-----------------------------------------------
### Description
These functions allow an installed package to be tested, or all base and recommended packages.
### Usage
```
testInstalledPackage(pkg, lib.loc = NULL, outDir = ".",
types = c("examples", "tests", "vignettes"),
srcdir = NULL, Ropts = "", ...)
testInstalledPackages(outDir = ".", errorsAreFatal = TRUE,
scope = c("both", "base", "recommended"),
types = c("examples", "tests", "vignettes"),
srcdir = NULL, Ropts = "", ...)
testInstalledBasic(scope = c("basic", "devel", "both", "internet"))
```
### Arguments
| | |
| --- | --- |
| `pkg` | name of an installed package. |
| `lib.loc` | library path(s) in which to look for the package. See `[library](../../base/html/library)`. |
| `outDir` | the directory into which to write the output files: this should already exist. |
| `types` | type(s) of tests to be done. |
| `errorsAreFatal` | logical: should testing terminate at the first error? |
| `srcdir` | Optional directory to look for `.save` files. |
| `Ropts` | Additional options such as -d valgrind to be passed to `R CMD BATCH` when running examples or tests. |
| `...` | additional arguments use when preparing the files to be run, e.g. `commentDontrun` and `commentDonttest`. |
| `scope` | Which set(s) should be tested? Can be abbreviated. |
### Details
These tests depend on having the package example files installed (which is the default). If package-specific tests are found in a ‘tests’ directory they can be tested: these are not installed by default, but will be if `R CMD INSTALL --install-tests` was used. Finally, the **R** code in any vignettes can be extracted and tested.
Package tests are run in a ‘pkg-tests’ subdirectory of ‘outDir’, and leave their output there.
`testInstalledBasic` runs the basic tests, if installed. This should be run with `LC_COLLATE=C` set: the function tries to set this but it may not work on all OSes. For non-English locales it may be desirable to set environment variables LANGUAGE to en and LC\_TIME to C to reduce the number of differences from reference results.
Except on Windows, if the environment variable TEST\_MC\_CORES is set to an integer greater than one, `testInstalledPackages` will run the package tests in parallel using its value as the maximum number of parallel processes.
The package-specific tests for the base and recommended packages are not normally installed, but `make install-tests` is provided to do so (as well as the basic tests).
### Value
Invisibly `0L` for success, `1L` for failure.
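A sketch of testing a single installed package; output files are written below the current directory.

```
## Not run:
testInstalledPackage("stats", types = "examples")
## End(Not run)
```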
r None
`xgettext` Extract Translatable Messages from R Files in a Package
-------------------------------------------------------------------
### Description
For each file in the ‘R’ directory (including system-specific subdirectories) of a package, extract the unique arguments passed to `[stop](../../base/html/stop)`, `[warning](../../base/html/warning)`, `[message](../../base/html/message)`, `[gettext](../../base/html/gettext)` and `[gettextf](../../base/html/sprintf)`, or to `[ngettext](../../base/html/gettext)`.
### Usage
```
xgettext(dir, verbose = FALSE, asCall = TRUE)
xngettext(dir, verbose = FALSE)
xgettext2pot(dir, potFile, name = "R", version, bugs)
```
### Arguments
| | |
| --- | --- |
| `dir` | the directory of a source package. |
| `verbose` | logical: should each file be listed as it is processed? |
| `asCall` | logical: if `TRUE` each argument is returned whole, otherwise the strings within each argument are extracted. |
| `potFile` | name of `po` template file to be produced. Defaults to ‘R-pkgname.pot’ where pkgname is the basename of ‘dir’. |
| `name, version, bugs` | as recorded in the template file: `version` defaults to the version number of the currently running **R**, and `bugs` to `"bugs.r-project.org"`. |
### Details
Leading and trailing white space (space, tab and linefeed) is removed for calls to `gettext`, `gettextf`, `stop`, `warning`, and `message`, as it is by the internal code that passes strings for translation.
We look to see if these functions were called with `domain = NA` and if so omit the call if `asCall = TRUE`: note that the call might contain a call to `gettext` which would be visible if `asCall = FALSE`.
`xgettext2pot` calls `xgettext` and then `xngettext`, and writes a PO template file for use with the GNU Gettext tools. This ensures that the strings for simple translation are unique in the file (as GNU Gettext requires), but does not do so for `ngettext` calls (the rules are not stated in the Gettext manual, but `msgfmt` complains if there is duplication between the sets).
If applied to the base package, this also looks in the ‘.R’ files in ‘[R\_HOME](../../base/html/rhome)/share/R’.
### Value
For `xgettext`, a list of objects of class `"xgettext"` (which has a `print` method), one per source file that potentially contains translatable strings.
For `xngettext`, a list of objects of class `"xngettext"`, which are themselves lists of length-2 character strings.
### See Also
`<update_pkg_po>()` which calls `xgettext2pot()`.
### Examples
```
## Not run: ## in a source-directory build of R:
xgettext(file.path(R.home(), "src", "library", "splines"))
## End(Not run)
```
r None
`fileutils` File Utilities
---------------------------
### Description
Utilities for listing files, and manipulating file paths.
### Usage
```
file_ext(x)
file_path_as_absolute(x)
file_path_sans_ext(x, compression = FALSE)
list_files_with_exts(dir, exts, all.files = FALSE,
full.names = TRUE)
list_files_with_type(dir, type, all.files = FALSE,
full.names = TRUE, OS_subdirs = .OStype())
```
### Arguments
| | |
| --- | --- |
| `x` | character vector giving file paths. |
| `compression` | logical: should compression extension ‘.gz’, ‘.bz2’ or ‘.xz’ be removed first? |
| `dir` | a character string with the path name to a directory. |
| `exts` | a character vector of possible file extensions (excluding the leading dot). |
| `all.files` | a logical. If `FALSE` (default), only visible files are considered; if `TRUE`, all files are used. |
| `full.names` | a logical indicating whether the full paths of the files found are returned (default), or just the file names. |
| `type` | a character string giving the ‘type’ of the files to be listed, as characterized by their extensions. Currently, possible values are `"code"` (R code), `"data"` (data sets), `"demo"` (demos), `"docs"` (R documentation), and `"vignette"` (vignettes). |
| `OS_subdirs` | a character vector with the names of OS-specific subdirectories to possibly include in the listing of R code and documentation files. By default, the value of the environment variable R\_OSTYPE, or if this is empty, the value of `[.Platform](../../base/html/platform)$OS.type`, is used. |
### Details
`file_ext` returns the file (name) extensions (excluding the leading dot). (Only purely alphanumeric extensions are recognized.)
`file_path_as_absolute` turns a possibly relative file path absolute, performing tilde expansion if necessary. This is a wrapper for `[normalizePath](../../base/html/normalizepath)`. Currently, `x` must be a single existing path.
`file_path_sans_ext` returns the file paths without extensions (and the leading dot). (Only purely alphanumeric extensions are recognized.)
`list_files_with_exts` returns the paths or names of the files in directory `dir` with extension matching one of the elements of `exts`. Note that by default, full paths are returned, and that only visible files are used.
`list_files_with_type` returns the paths of the files in `dir` of the given ‘type’, as determined by the extensions recognized by **R**. When listing R code and documentation files, files in OS-specific subdirectories are included if present according to the value of `OS_subdirs`. Note that by default, full paths are returned, and that only visible files are used.
### See Also
`[file.path](../../base/html/file.path)`, `[file.info](../../base/html/file.info)`, `[list.files](../../base/html/list.files)`
### Examples
```
dir <- file.path(R.home(), "library", "stats")
list_files_with_exts(file.path(dir, "demo"), "R")
list_files_with_type(file.path(dir, "demo"), "demo") # the same
file_path_sans_ext(list.files(file.path(R.home("modules"))))
```
r None
`pskill` Kill a Process
------------------------
### Description
`pskill` sends a signal to a process, usually to terminate it.
### Usage
```
pskill(pid, signal = SIGTERM)
SIGHUP
SIGINT
SIGQUIT
SIGKILL
SIGTERM
SIGSTOP
SIGTSTP
SIGCHLD
SIGUSR1
SIGUSR2
```
### Arguments
| | |
| --- | --- |
| `pid` | positive integers: one or more process IDs as returned by `[Sys.getpid](../../base/html/sys.getpid)`. |
| `signal` | integer, most often one of the symbolic constants. |
### Details
Signals are a C99 concept, but only a small number are required to be supported (of those listed, only `SIGINT` and `SIGTERM`). They are much more widely used on POSIX operating systems (which should define all of those listed here), which also support a `kill` system call to send a signal to a process, most often to terminate it. Function `pskill` provides a wrapper: it silently ignores invalid values of its arguments, including zero or negative pids.
In normal use on a Unix-alike, `Ctrl-C` sends `SIGINT`, `Ctrl-\` sends `SIGQUIT` and `Ctrl-Z` sends `SIGTSTP`: that and `SIGSTOP` suspend a process which can be resumed by `SIGCONT`.
The signals are small integers, but the actual numeric values are not standardized (and most do differ between OSes). The `SIG*` objects contain the appropriate integer values for the current platform (or `NA_integer_` if the signal is not defined).
Only `SIGINT` and `SIGKILL` will be defined on Windows, and `pskill` will always use the Windows system call `TerminateProcess`.
### Value
A logical vector of the same length as `pid`, `TRUE` (for success) or `FALSE`, invisibly.
### See Also
Package parallel has several means to launch child processes which record the process IDs.
`<psnice>`
### Examples
```
## Not run:
pskill(c(237, 245), SIGKILL)
## End(Not run)
```
r None
`print.via.format` Printing Utilities
--------------------------------------
### Description
`.print.via.format` is a “prototype” `[print](../../base/html/print)()` method, useful, at least as a start, by a simple
```
print.<myS3class> <- .print.via.format
```
### Usage
```
.print.via.format(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | object to be printed. |
| `...` | optional further arguments, passed to `[format](../../base/html/format)`. |
### Value
`x`, invisibly (by `[invisible](../../base/html/invisible)()`), as `[print](../../base/html/print)` methods should.
### See Also
The `[print](../../base/html/print)` generic; its default method `[print.default](../../base/html/print.default)` (used for many basic implicit classes such as `"numeric"`, `"character"` and `[array](../../base/html/array)`s of them, `[list](../../base/html/list)`s etc).
### Examples
```
## The function is simply defined as
function (x, ...) {
writeLines(format(x, ...))
invisible(x)
}
## is used for simple print methods in R, and as prototype for new methods.
```
r None
`checkTnF` Check R Packages or Code for T/F
--------------------------------------------
### Description
Checks the specified R package or code file for occurrences of `T` or `F`, and gathers the expression containing these. This is useful as in R `T` and `F` are just variables which are set to the logicals `TRUE` and `FALSE` by default, but are not reserved words and hence can be overwritten by the user. Hence, one should always use `TRUE` and `FALSE` for the logicals.
### Usage
```
checkTnF(package, dir, file, lib.loc = NULL)
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. If given, the installed R code and the examples in the documentation files of the package are checked. R code installed as an image file cannot be checked. |
| `dir` | a character string specifying the path to a package's root source directory. This must contain the subdirectory ‘R’ (for R code), and should also contain ‘man’ (for documentation). Only used if `package` is not given. If used, the R code files and the examples in the documentation files are checked. |
| `file` | the name of a file containing R code to be checked. Used if neither `package` nor `dir` are given. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
### Value
An object of class `"checkTnF"` which is a list containing, for each file where occurrences of `T` or `F` were found, a list with the expressions containing these occurrences. The names of the list are the corresponding file names.
There is a `print` method for nicely displaying the information contained in such objects.
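A sketch with a placeholder source directory.

```
## Not run:
## 'path/to/pkg' is a placeholder for a package source directory
res <- checkTnF(dir = "path/to/pkg")
res
## End(Not run)
```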
r None
`userdir` R User Directories
-----------------------------
### Description
Directories for storing R-related user-specific data, configuration and cache files.
### Usage
```
R_user_dir(package, which = c("data", "config", "cache"))
```
### Arguments
| | |
| --- | --- |
| `package` | a character string giving the name of an **R** package |
| `which` | a character string indicating the kind of file(s) of interest. Can be abbreviated. |
### Details
For desktop environments using X Windows, the freedesktop.org project (formerly X Desktop Group, XDG) developed the XDG Base Directory Specification (<https://specifications.freedesktop.org/basedir-spec>) for standardizing the location where certain files should be placed. CRAN package [rappdirs](https://CRAN.R-project.org/package=rappdirs) provides these general locations with appropriate values for all platforms for which **R** is available.
`R_user_dir` specializes the general mechanism to R package specific locations for user files, by providing package specific subdirectories inside a ‘R’ subdirectory inside the “base” directories appropriate for user-specific data, configuration and cache files (see the examples), with the intent that packages will not interfere if they work within their respective subdirectories.
The locations of these base directories can be customized via the specific environment variables R\_USER\_DATA\_DIR, R\_USER\_CONFIG\_DIR and R\_USER\_CACHE\_DIR. If these are not set, the general XDG-style environment variables XDG\_DATA\_HOME, XDG\_CONFIG\_HOME and XDG\_CACHE\_HOME are used if set, and otherwise, defaults appropriate for the **R** platform in use are employed.
### Examples
```
## IGNORE_RDIFF_BEGIN
R_user_dir("FOO", "cache")
## IGNORE_RDIFF_END
```
r None
`makevars` User and Site Compilation Variables
-----------------------------------------------
### Description
Determine the location of the user and site specific ‘Makevars’ files for customizing package compilation.
### Usage
```
makevars_user()
makevars_site()
```
### Details
Package maintainers can use these functions to employ user and site specific compilation settings also for compilations not using **R**'s mechanisms (in particular, custom compilations in subdirectories of ‘src’), e.g., by adding configure code calling **R** with `cat(tools::makevars_user())` or `cat(tools::makevars_site())`, and if non-empty passing this with -f to custom Make invocations.
### Value
A character string with the path to the user or site specific ‘Makevars’ file, or an empty character vector if there is no such file.
### See Also
Section ‘Customizing package compilation’ in the ‘R Installation and Administration’ manual.
### Examples
```
makevars_user()
makevars_site()
```
r None
`checkFF` Check Foreign Function Calls
---------------------------------------
### Description
Performs checks on calls to compiled code from R code. Currently only checks whether the interface functions such as `.C` and `.Fortran` are called with a `"[NativeSymbolInfo](../../base/html/getnativesymbolinfo)"` first argument or with argument `PACKAGE` specified, which is highly recommended to avoid name clashes in foreign function calls.
### Usage
```
checkFF(package, dir, file, lib.loc = NULL,
registration = FALSE, check_DUP = FALSE,
verbose = getOption("verbose"))
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. If given, the installed R code of the package is checked. |
| `dir` | a character string specifying the path to a package's root source directory. This should contain the subdirectory ‘R’ (for R code). Only used if `package` is not given. |
| `file` | the name of a file containing R code to be checked. Used if neither `package` nor `dir` are given. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
| `registration` | a logical. If `TRUE`, checks the registration information on the call (if available). |
| `check_DUP` | a logical. If `TRUE`, `.C` and `.Fortran` calls with `DUP = FALSE` are reported. |
| `verbose` | a logical. If `TRUE`, additional diagnostics are printed (and the result is returned invisibly). |
### Details
Note that we can only check if the `name` argument is a symbol or a character string, not what class of object the symbol resolves to at run-time.
If the package has a namespace which contains a `useDynLib` directive, calls in top-level functions in the package are not reported as their symbols will be preferentially looked up in the DLL named in the first `useDynLib` directive.
This checks that calls with `PACKAGE` specified are to the same package, and reports separately those which are in base packages and those which are in other packages (and if those packages are specified in the ‘DESCRIPTION’ file).
### Value
An object of class `"checkFF"`.
There are `[format](../../base/html/format)` and `print` methods to display the information contained in such objects.
### See Also
`[.C](../../base/html/foreign)`, `[.Fortran](../../base/html/foreign)`; `[Foreign](../../base/html/foreign)`.
### Examples
```
# order is pretty much random
checkFF(package = "stats", verbose = TRUE)
```
r None
`make_translations_pkg` Package the Current Translations in the R Sources
--------------------------------------------------------------------------
### Description
A utility for R Core members to prepare a package of updated translations.
### Usage
```
make_translations_pkg(srcdir, outDir = ".", append = "-1")
```
### Arguments
| | |
| --- | --- |
| `srcdir` | The **R** source directory. |
| `outDir` | The directory into which to place the prepared package. |
| `append` | The suffix for the package version number, e.g. `3.0.0-1` will be the default in **R** 3.0.0. |
### Details
This extracts the translations in a current **R** source distribution and packages them as a source package called translations which can be distributed on CRAN and installed by `[update.packages](../../utils/html/update.packages)`. This allows e.g. the translations shipped in **R** 3.x.y to be updated to those currently in R-patched, even by a user without administrative privileges.
The package has a Depends field which restricts it to versions `3.x.*` for a single `x`.
r None
`SweaveTeXFilter` Strip R Code out of Sweave File
--------------------------------------------------
### Description
This function blanks out code chunks and Noweb markup in an Sweave input file, for spell checking or other uses.
### Usage
```
SweaveTeXFilter(ifile, encoding = "unknown")
```
### Arguments
| | |
| --- | --- |
| `ifile` | Input file or connection. |
| `encoding` | Text encoding to pass to `[readLines](../../base/html/readlines)`. |
### Details
This function blanks out all Noweb markup and code chunks from an Sweave input file, leaving behind the LaTeX source, so that a LaTeX-aware spelling checker can check it and report errors in their original locations.
### Value
A character vector which if written to a file, one element per line, would duplicate the text elements of the original Sweave input file.
### Author(s)
Duncan Murdoch
### See Also
`[aspell](../../utils/html/aspell)`, for which this is used with `filter = "Sweave"`.
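A sketch of that use, with a placeholder file path.

```
## Not run:
## 'vignettes/foo.Rnw' is a placeholder path to an Sweave source file
aspell("vignettes/foo.Rnw", filter = "Sweave")
## End(Not run)
```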
r None
`toHTML` Display an Object in HTML
-----------------------------------
### Description
This generic function generates a complete HTML page from an object.
### Usage
```
toHTML(x, ...)
## S3 method for class 'packageIQR'
toHTML(x, ...)
## S3 method for class 'news_db'
toHTML(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | An object to display. |
| `...` | Optional parameters for methods; the `"packageIQR"` and `"news_db"` methods pass these to `[HTMLheader](htmlheader)`. |
### Value
A character vector to display the object `x`. The `"packageIQR"` method is designed to display lists in the **R** help system.
### See Also
`[HTMLheader](htmlheader)`
### Examples
```
cat(toHTML(demo(package = "base")), sep = "\n")
```
r None
`tools-package` Tools for Package Development
----------------------------------------------
### Description
Tools for package development, administration and documentation.
### Details
This package contains tools for manipulating R packages and their documentation.
For a complete list of functions, use `library(help = "tools")`.
### Author(s)
Kurt Hornik and Friedrich Leisch
Maintainer: R Core Team [[email protected]](mailto:[email protected])
r None
`checkVignettes` Check Package Vignettes
-----------------------------------------
### Description
Check all vignettes of a package by running `[Sweave](../../utils/html/sweave)` (or other custom weave function) and/or `[Stangle](../../utils/html/sweave)` (or other custom tangle function) on them. All R source code files found after the tangling step are `[source](../../base/html/source)`ed to check whether all code can be executed without errors.
### Usage
```
checkVignettes(package, dir, lib.loc = NULL,
tangle = TRUE, weave = TRUE, latex = FALSE,
workdir = c("tmp", "src", "cur"),
keepfiles = FALSE)
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. If given, vignette source files are looked for in subdirectory ‘doc’. |
| `dir` | a character string specifying the path to a package's root source directory. If given, vignette source files are looked for in subdirectory ‘vignettes’. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
| `tangle` | Perform a tangle and `[source](../../base/html/source)` the extracted code? |
| `weave` | Perform a weave? |
| `latex` | logical: if `weave` and `latex` are `TRUE` and there is no ‘Makefile’ in the vignettes directory, run the intermediate ‘.tex’ outputs from weaving through `[texi2pdf](texi2dvi)`. |
| `workdir` | Directory used as working directory while checking the vignettes. If `"tmp"` then a temporary directory is created, this is the default. If `"src"` then the directory containing the vignettes itself is used, if `"cur"` then the current working directory of **R** is used. |
| `keepfiles` | Delete files in the temporary directory? This option is ignored when `workdir != "tmp"`. |
### Details
This function first uses `[pkgVignettes](buildvignettes)` to find the package vignettes, and in particular their vignette engines (see `[vignetteEngine](vignetteengine)`).
If `tangle` is true, it then runs `[Stangle](../../utils/html/sweave)` (or other custom tangle function provided by the engine) to produce (one or more) **R** code files from each vignette, then `[source](../../base/html/source)`s each code file in turn.
If `weave` is true, the vignettes are run through `[Sweave](../../utils/html/sweave)` (or other custom weave function provided by the engine). If `latex` is also true and there is no ‘Makefile’ in the vignettes directory, `[texi2pdf](texi2dvi)` is run on the intermediate ‘.tex’ files from weaving for those vignettes which did not give errors in the previous steps.
### Value
An object of class `"checkVignettes"`, which is a list with the error messages found during the tangle, source, weave and latex steps. There is a `print` method for displaying the information contained in such objects.
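### Examples
A sketch of typical use on a package source tree (the path is illustrative):
```
## Not run:
res <- checkVignettes(dir = "~/mypackages/foo", tangle = TRUE, weave = TRUE)
res   # the print method lists problems from the tangle, source and weave steps
## End(Not run)
```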
r None
`Rcmd` R CMD Interface
-----------------------
### Description
Invoke `R CMD` tools from within **R**.
### Usage
```
Rcmd(args, ...)
```
### Arguments
| | |
| --- | --- |
| `args` | a character vector of arguments to `R CMD`. |
| `...` | arguments to be passed to `[system2](../../base/html/system2)`. |
### Details
Provides a portable convenience interface to the `R CMD` mechanism by invoking the corresponding system commands (using the version of **R** currently used) via `[system2](../../base/html/system2)`.
### Value
See section “Value” in `[system2](../../base/html/system2)`.
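### Examples
A small illustration: ask `R CMD config` for the C compiler, capturing the output by passing `stdout = TRUE` on to `system2`.
```
Rcmd(c("config", "CC"), stdout = TRUE)
```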
r None
`undoc` Find Undocumented Objects
----------------------------------
### Description
Finds the objects in a package which are undocumented, in the sense that they are visible to the user (or data objects or S4 classes provided by the package), but no documentation entry exists.
### Usage
```
undoc(package, dir, lib.loc = NULL)
```
### Arguments
| | |
| --- | --- |
| `package` | a character string naming an installed package. |
| `dir` | a character string specifying the path to a package's root source directory. This must contain the subdirectory ‘man’ with **R** documentation sources (in Rd format), and at least one of the ‘R’ or ‘data’ subdirectories with **R** code or data objects, respectively. |
| `lib.loc` | a character vector of directory names of **R** libraries, or `NULL`. The default value of `NULL` corresponds to all libraries currently known. The specified library trees are used to search for `package`. |
### Details
This function is useful for package maintainers mostly. In principle, *all* user-level **R** objects should be documented.
The base package is special as it contains the primitives and these do not have definitions available at code level. We provide equivalent closures in environments `.ArgsEnv` and `.GenericArgsEnv` in the base package that are used for various purposes: `undoc("base")` checks that all the primitives that are not language constructs are prototyped in those environments and no others are.
### Value
An object of class `"undoc"` which is a list of character vectors containing the names of the undocumented objects split according to documentation type.
There is a `print` method for nicely displaying the information contained in such objects.
### See Also
`<codoc>`, `[QC](qc)`
### Examples
```
undoc("tools") # Undocumented objects in 'tools'
```
r None
`psnice` Get or Set the Priority (Niceness) of a Process
---------------------------------------------------------
### Description
Get or set the ‘niceness’ of the current process, or one or more other processes.
### Usage
```
psnice(pid = Sys.getpid(), value = NA_integer_)
```
### Arguments
| | |
| --- | --- |
| `pid` | positive integers: the process IDs of one or more processes; defaults to the **R** session process. |
| `value` | The niceness to be set, or `NA` for an enquiry. |
### Details
POSIX operating systems have a concept of process priorities, usually from 0 to 39 (or 40) with 20 being a normal priority and (somewhat confusingly) larger numeric values denoting lower priority. To add to the confusion, there is a ‘niceness’ value, the amount by which the priority numerically exceeds 20 (which can be negative). Processes with high niceness will receive less CPU time than those with normal priority. On some OSes, processes with niceness `+19` are only run when the system would otherwise be idle.
On many OSes utilities such as `top` report the priority and not the niceness. Niceness is used by the utility ‘/usr/bin/renice’: ‘/usr/bin/nice’ (and `/usr/bin/renice -n`) specifies an *increment* in niceness.
Only privileged users (usually super-users) can lower the niceness.
Windows has a slightly different concept of ‘priority classes’. We have mapped the idle priority to niceness `19`, ‘below normal’ to `15`, normal to `0`, ‘above normal’ to `-5` and ‘realtime’ to `-10`. Unlike Unix-alikes, a non-privileged user can increase the priority class on Windows (but using ‘realtime’ is inadvisable).
### Value
An integer vector of *previous* niceness values, `NA` if unknown for any reason.
### See Also
Various functions in package parallel create child processes whose priority may need to be changed.
`<pskill>`.
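### Examples
A small illustration: an enquiry about the current session and, not run, a hypothetical call renicing another process.
```
psnice()    # the niceness of this R session (value = NA is an enquiry)
## Not run:
## 'pid' stands for the process ID of some other process
psnice(pid, value = 15)
## End(Not run)
```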
r None
`md5sum` Compute MD5 Checksums
-------------------------------
### Description
Compute the 128-bit MD5 hashes (32 hexadecimal characters) of one or more files.
### Usage
```
md5sum(files)
```
### Arguments
| | |
| --- | --- |
| `files` | character. The paths of file(s) whose contents are to be hashed. |
### Details
An MD5 ‘hash’ or ‘checksum’ or ‘message digest’ is a 128-bit summary of the file contents represented by 32 hexadecimal digits. Files with different MD5 sums are different: only very exceptionally (and usually with the intent to deceive) are files with the same sums different.
On Windows all files are read in binary mode (as the `md5sum` utilities there do): on other OSes the files are read in the default mode (almost always text mode where there is more than one).
MD5 sums are used as a check that **R** packages have been unpacked correctly and not subsequently modified.
### Value
A character vector of the same length as `files`, with names equal to `files` (possibly expanded). The elements will be `NA` for non-existent or unreadable files, otherwise a 32-character string of hexadecimal digits.
### Source
The underlying C code was written by Ulrich Drepper and extracted from a 2001 release of `glibc`.
### See Also
`[checkMD5sums](checkmd5sums)`
### Examples
```
as.vector(md5sum(dir(R.home(), pattern = "^COPY", full.names = TRUE)))
```
r None
`package_dependencies` Computations on the Dependency Hierarchy of Packages
----------------------------------------------------------------------------
### Description
Find (recursively) dependencies or reverse dependencies of packages.
### Usage
```
package_dependencies(packages = NULL, db = NULL, which = "strong",
recursive = FALSE, reverse = FALSE,
verbose = getOption("verbose"))
```
### Arguments
| | |
| --- | --- |
| `packages` | a character vector of package names. |
| `db` | character matrix as from `[available.packages](../../utils/html/available.packages)()` (with the default `NULL` the results of this call) or data frame variants thereof. Alternatively, a package database like the one available from <https://cran.r-project.org/web/packages/packages.rds>. |
| `which` | a character vector listing the types of dependencies, a subset of `c("Depends", "Imports", "LinkingTo", "Suggests", "Enhances")`. Character string `"all"` is shorthand for that vector, character string `"most"` for the same vector without `"Enhances"`, character string `"strong"` (default) for the first three elements of that vector. |
| `recursive` | a logical indicating whether (reverse) dependencies of (reverse) dependencies (and so on) should be included, or a character vector like `which` indicating the type of (reverse) dependencies to be added recursively. |
| `reverse` | logical: if `FALSE` (default), regular dependencies are calculated, otherwise reverse dependencies. |
| `verbose` | logical indicating if output should monitor the package search cycles. |
### Value
Named list with one element for each package in argument `packages`, each consists of a character vector naming the (recursive) (reverse) dependencies of that package.
For given packages which are not found in the db, `NULL` entries are returned, as opposed to `character(0)` entries which indicate no dependencies.
### See Also
`[dependsOnPkgs](dependsonpkgs)`.
### Examples
```
myPkgs <- c("MASS", "Matrix", "KernSmooth", "class", "cluster", "codetools")
pdb <- available.packages()
system.time(
dep1 <- package_dependencies(myPkgs, db = pdb) # all arguments at default
) # very fast
utils::str(dep1, vec.len=10)
system.time( ## reverse dependencies, recursively --- takes much longer:
deps <- package_dependencies(myPkgs, db = pdb, which = "most",
recursive = TRUE, reverse = TRUE)
) # seen ~ 10 seconds
lengths(deps) # 2020-05-03: all are 16053, but codetools with 16057
## install.packages(dependencies = TRUE) installs 'most' dependencies
## and the strong recursive dependencies of these: these dependencies
## can be obtained using 'which = "most"' and 'recursive = "strong"'.
## To illustrate on the first packages with non-missing Suggests:
packages <- pdb[head(which(!is.na(pdb[, "Suggests"]))), "Package"]
package_dependencies(packages, db = pdb,
which = "most", recursive = "strong")
```
r None
`assertCondition` Asserting Error Conditions
---------------------------------------------
### Description
When testing code, it is not sufficient to check that results are correct, but also that errors or warnings are signalled in appropriate situations. The functions described here provide a convenient facility for doing so. The three functions check that evaluating the supplied expression produces an error, a warning or one of a specified list of conditions, respectively. If the assertion fails, an error is signalled.
### Usage
```
assertError(expr, classes = "error", verbose = FALSE)
assertWarning(expr, classes = "warning", verbose = FALSE)
assertCondition(expr, ..., .exprString = , verbose = FALSE)
```
### Arguments
| | |
| --- | --- |
| `expr` | an unevaluated **R** expression which will be evaluated via `[tryCatch](../../base/html/conditions)(expr, ..)`. |
| `classes, ...` | `[character](../../base/html/character)` strings corresponding to the classes of the conditions that would satisfy the assertion; e.g., `"error"` or `"warning"`. If none are specified, any condition will satisfy the assertion. See the details section. |
| `.exprString` | The string to be printed corresponding to `expr`. By default, the actual `expr` will be deparsed. Will be omitted if the function is supplied with the actual expression to be tested. If `assertCondition()` is called from another function, with the actual expression passed as an argument to that function, supply the deparsed version. |
| `verbose` | If `TRUE`, a message is printed when the condition is satisfied. |
### Details
`assertCondition()` uses the general condition mechanism to check all the conditions generated in evaluating `expr`. The occurrence of any of the supplied condition classes among these satisfies the assertion regardless of what other conditions may be signalled.
`assertError()` is a convenience function for asserting errors; it calls `assertCondition()`.
`assertWarning()` asserts that a warning will be signalled, but *not* an error, whereas `assertCondition(expr, "warning")` will be satisfied even if an error follows the warning. See the examples.
### Value
If the assertion is satisfied, a list of all the condition objects signalled is returned, invisibly. See `[conditionMessage](../../base/html/conditions)` for the interpretation of these objects. Note that *all* conditions signalled during the evaluation are returned, whether or not they were among the requirements.
### Author(s)
John Chambers and Martin Maechler
### See Also
`[stop](../../base/html/stop)`, `[warning](../../base/html/warning)`; `[signalCondition](../../base/html/conditions)`, `[tryCatch](../../base/html/conditions)`.
### Examples
```
assertError(sqrt("abc"))
assertWarning(matrix(1:8, 4,3))
assertCondition( ""-1 ) # ok, any condition would satisfy this
try( assertCondition(sqrt(2), "warning") )
## .. Failed to get warning in evaluating sqrt(2)
assertCondition(sqrt("abc"), "error") # ok
try( assertCondition(sqrt("abc"), "warning") )# -> error: had no warning
assertCondition(sqrt("abc"), "error")
## identical to assertError() call above
assertCondition(matrix(1:5, 2,3), "warning")
try( assertCondition(matrix(1:8, 4,3), "error") )
## .. Failed to get expected error ....
## either warning or worse:
assertCondition(matrix(1:8, 4,3), "error","warning") # OK
assertCondition(matrix(1:8, 4, 3), "warning") # OK
## when both are signalled:
ff <- function() { warning("my warning"); stop("my error") }
assertCondition(ff(), "warning")
## but assertWarning does not allow an error to follow
try(assertWarning(ff()))
assertCondition(ff(), "error") # ok
assertCondition(ff(), "error", "warning") # ok (quietly, catching warning)
## assert that assertC..() does not assert [and use *one* argument only]
assertCondition( assertCondition(sqrt( 2 ), "warning") )
assertCondition( assertCondition(sqrt("abc"), "warning"), "error")
assertCondition( assertCondition(matrix(1:8, 4,3), "error"),
"error")
```
r None
`tools-deprecated` Deprecated Objects in Package tools
-------------------------------------------------------
### Description
The functions or variables listed here are provided for compatibility with older versions of **R** only, and may be defunct as soon as the next release.
### Usage
```
package.dependencies(x, check = FALSE,
depLevel = c("Depends", "Imports", "Suggests"))
getDepList(depMtrx, instPkgs, recursive = TRUE, local = TRUE,
reduce = TRUE, lib.loc = NULL)
pkgDepends(pkg, recursive = TRUE, local = TRUE, reduce = TRUE,
lib.loc = NULL)
installFoundDepends(depPkgList, ...)
vignetteDepends(vignette, recursive = TRUE, reduce = TRUE,
local = TRUE, lib.loc = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | A matrix of package descriptions as returned by `[available.packages](../../utils/html/available.packages)`. |
| `check` | If `TRUE`, return logical vector of check results. If `FALSE`, return parsed list of dependencies. |
| `depLevel` | Whether to look for `Depends` or `Suggests` level dependencies. Can be abbreviated. |
| `depMtrx` | a dependency matrix as from `[package.dependencies](tools-deprecated)()`. |
| `pkg` | the name of the package |
| `instPkgs` | a matrix specifying all packages installed on the local system, as from `installed.packages` |
| `recursive` | whether or not to include indirect dependencies. |
| `local` | whether or not to search only locally |
| `reduce` | whether or not to collapse all sets of dependencies to a minimal value |
| `lib.loc` | what libraries to use when looking for installed packages. `NULL` indicates all library directories in the current `.libPaths()`. Note that `lib.loc` is not used in `getDepList()` and deprecated there. |
| `depPkgList` | A `Found` element from a `pkgDependsList` object |
| `...` | Arguments to pass on to `[install.packages](../../utils/html/install.packages)` |
| `vignette` | the path to the vignette source |
### See Also
`[Deprecated](../../base/html/deprecated)`, `[Defunct](../../base/html/defunct)`
r None
`package_native_routine_registration_skeleton` Write Skeleton for Adding Native Routine Registration to a Package
------------------------------------------------------------------------------------------------------------------
### Description
Write a skeleton for adding native routine registration to a package.
### Usage
```
package_native_routine_registration_skeleton(dir, con = stdout(),
align = TRUE, character_only = TRUE, include_declarations = TRUE)
```
### Arguments
| | |
| --- | --- |
| `dir` | Top-level directory of a package. |
| `con` | Connection on which to write the skeleton: can be specified as a file path. |
| `align` | Logical: should the registration tables be lined up in three columns each? |
| `character_only` | Logical: should only `.NAME` arguments specified by character strings (and not as names of **R** objects nor expressions) be extracted? |
| `include_declarations` | Logical: should the output include declarations (also known as ‘prototypes’) for the registered routines? |
### Details
Registration is described in section ‘Registering native routines’ of ‘Writing R Extensions’. This function produces a skeleton of the C code which needs to be added to enable registration, conventionally as file ‘src/init.c’ or appended to the sole C file of the package.
This function examines the code in the ‘R’ directory of the package for calls to `.C`, `.Fortran`, `.Call` and `.External` and creates registration information for those it can make sense of. If the number of arguments used cannot be determined it will be recorded as `-1`: such values should be corrected.
Optionally the skeleton will include declarations for the registered routines: they should be checked against the C/Fortran source code, not least as the number of arguments is taken from the **R** code. For `.Call` and `.External` calls they will often suffice, but for `.C` and `.Fortran` calls the `void *` arguments would ideally be replaced by the actual types. Otherwise declarations need to be included (they may exist earlier in that file if appending to a file, or in a header file which can be included in ‘init.c’).
The default value of `character_only` is appropriate when working on a package without any existing registration: `character_only = FALSE` can be used to suggest updates for a package which has been extended since registration. For the default value, if `.NAME` values are found which are not character strings (e.g. names or expressions) this is noted via a comment in the output.
Packages which used the earlier form of creating **R** objects for native symbols *via* additional arguments in a `useDynLib` directive will probably most easily be updated to use registration with `character_only = FALSE`.
If an entry point is used with different numbers of arguments in the package's **R** code, an entry in the table (and optionally, a declaration) is made for each number, and a comment placed in the output. This needs to be resolved: only `.External` calls can have a variable number of arguments, which should be declared as `-1`.
A surprising number of CRAN packages had calls in **R** code to native routines not included in the package, which will lead to a ‘loading failed’ error during package installation when the registration C code is added.
Calls which do not name a routine such as `.Call(...)` will be silently ignored.
### Value
None: the output is written to the connection `con`.
### Extracting C/C++ prototypes
There are several tools available to extract function declarations from C or C++ code.
For C code one can use `cproto` (<https://invisible-island.net/cproto/cproto.html>; Windows executables are available), for example
```
cproto -I/path/to/R/include -e *.c
```
`ctags` (commonly distributed with the OS) covers C and C++, using something like
```
ctags -x *.c
```
to list all function usages. (The ‘Exuberant’ version allows a lot more control.)
### Extracting Fortran prototypes
`gfortran` 9.2 and later can extract C prototypes for Fortran subroutines with a special flag:
```
gfortran -c -fc-prototypes-external file.f
```
although ironically not for functions declared `bind(C)`.
### Note
This only examines the ‘R’ directory: it will not find e.g. `.Call` calls used directly in examples, tests *etc*.
Static code analysis is used to find the `.C` etc calls: it *will* find those in parts of the **R** code ‘commented out’ by inclusion in `if(FALSE) { ... }`. On the other hand, it will fail to find the entry points in constructs like
```
.Call(if(int) "rle_i" else "rle_d", i, force)
```
and does not know the value of variables in calls like
```
.Call (cfunction, ...)
.Call(..., PACKAGE="sparseLTSEigen")
```
(but if `character_only` is false, will extract the first as `"cfunction"`). Calls which have not been fully resolved will be noted *via* comments in the output file.
Calls to entry points in other packages will be ignored if they have an explicit (character string) `PACKAGE` argument.
### See Also
`[package.skeleton](../../utils/html/package.skeleton)`.
### Examples
```
## Not run:
## with a completed splines/DESCRIPTION file,
tools::package_native_routine_registration_skeleton('splines',,,FALSE)
## produces
#include <R.h>
#include <Rinternals.h>
#include <stdlib.h> // for NULL
#include <R_ext/Rdynload.h>
/* FIXME:
Check these declarations against the C/Fortran source code.
*/
/* .Call calls */
extern SEXP spline_basis(SEXP, SEXP, SEXP, SEXP);
extern SEXP spline_value(SEXP, SEXP, SEXP, SEXP, SEXP);
static const R_CallMethodDef CallEntries[] = {
{"spline_basis", (DL_FUNC) &spline_basis, 4},
{"spline_value", (DL_FUNC) &spline_value, 5},
{NULL, NULL, 0}
};
void R_init_splines(DllInfo *dll)
{
R_registerRoutines(dll, NULL, CallEntries, NULL, NULL);
R_useDynamicSymbols(dll, FALSE);
}
## End(Not run)
```
| programming_docs |
r None
`checkPoFiles` Check Translation Files for Inconsistent Format Strings
-----------------------------------------------------------------------
### Description
These functions compare formats embedded in English messages with translated strings to check for consistency. `checkPoFile` checks one file, while `checkPoFiles` checks all files for a specified language.
### Usage
```
checkPoFile(f, strictPlural = FALSE)
checkPoFiles(language, dir = ".")
```
### Arguments
| | |
| --- | --- |
| `f` | a character string giving a single filepath. |
| `strictPlural` | whether to compare formats of singular and plural forms in a strict way. |
| `language` | a character string giving a language code. |
| `dir` | a path to a directory in which to check files. |
### Details
Part of **R**'s internationalization depends on translations of messages in ‘.po’ files. In these files an ‘English’ message taken from the **R** sources is followed by a translation into another language. Many of these messages are format strings for C or **R** `[sprintf](../../base/html/sprintf)` and related functions. In these cases, the translation must give a compatible format or an error will be generated when the message is displayed.
The rules for compatibility differ between C and **R** in several ways. C supports several conversions not supported by **R**, namely `c`, `u`, `p`, `n`. It is allowed in C's `sprintf()` function to have more arguments than are needed by the format string, but in **R** the counts must match exactly. **R** requires types of arguments to match, whereas C will do the display whether it makes sense or not.
These functions compromise on the testing as follows. The additional formats allowed in C are accepted, and all differences in argument type or count are reported. As a consequence some reported differences are not errors.
If the `strictPlural` argument is `TRUE`, then argument lists must agree exactly between singular and plural forms of messages; if `FALSE`, then translations only need to match one or the other of the two forms. When `checkPoFiles` calls `checkPoFile`, the `strictPlural` argument is set to `TRUE` for files with names starting ‘R-’, and to `FALSE` otherwise.
Items marked as ‘fuzzy’ in the ‘.po’ file are not processed (as they are ignored by the message compiler).
If a difference is found, the translated string is checked for variant percent signs (e.g., the wide percent sign `"\uFF05"`). Such signs will not be recognized as format specifiers, and are likely to be errors.
### Value
Both functions return an object of S3 class `"check_po_files"`. A `print` method is defined for this class to display a report on the differences.
### Author(s)
Duncan Murdoch
### References
See the GNU gettext manual for the ‘.po’ file format:
<https://www.gnu.org/software/gettext/manual/gettext.html>.
### See Also
`<update_pkg_po>()` which calls `checkPoFile()`; `<xgettext>`, `[sprintf](../../base/html/sprintf)`.
### Examples
```
## Not run:
checkPoFiles("de", "/path/to/R/src/directory")
## End(Not run)
```
r None
`loadRdMacros` Load User-defined Rd Help System Macros
-------------------------------------------------------
### Description
Loads macros from an ‘.Rd’ file, or from several ‘.Rd’ files contained in a package.
### Usage
```
loadRdMacros(file, macros = TRUE)
loadPkgRdMacros(pkgdir, macros)
```
### Arguments
| | |
| --- | --- |
| `file` | A file in Rd format containing macro definitions. |
| `macros` | `TRUE` or a previous set of macro definitions, in the format expected by the `[parse\_Rd](parse_rd)` `macros` argument. |
| `pkgdir` | The base directory of a source package or an installed package. |
### Details
The files parsed by this function should contain only macro definitions; a warning will be issued if anything other than macro definitions, comments or white space is found.
The `macros` argument may be a filename of a base set of macros, or the result of a previous call to `loadRdMacros` or `loadPkgRdMacros` in the same session. These results should be assumed to be valid only within the current session.
The `loadPkgRdMacros` function first looks for an `"RdMacros"` entry in the package ‘DESCRIPTION’ file. If present, it should contain a comma-separated list of other package names; their macros will be loaded before those of the current package. It will then look in the current package for ‘.Rd’ files in the ‘man/macros’ or ‘help/macros’ subdirectories, and load those.
### Value
These functions each return an environment containing objects with the names of the newly defined macros from the last file processed. The parent environment will be macros from the previous file, and so on. The first file processed will have `[emptyenv](../../base/html/environment)()` as its parent.
### Author(s)
Duncan Murdoch
### References
See the ‘Writing R Extensions’ manual for the syntax of Rd files, or <https://developer.r-project.org/parseRd.pdf> for a technical discussion.
### See Also
`[parse\_Rd](parse_rd)`
### Examples
```
f <- tempfile()
writeLines(paste0("\\newcommand{\\logo}{\\if{html}{\\figure{Rlogo.svg}{options: width=100}",
"\\if{latex}{\\figure{Rlogo.pdf}{options: width=0.5in}}}"),
f)
m <- loadRdMacros(f)
ls(m)
ls(parent.env(m))
ls(parent.env(parent.env(m)))
```
r None
`Rd2HTML` Rd Converters
------------------------
### Description
These functions take the output of `[parse\_Rd](parse_rd)()`, an `Rd` object, and produce a help page from it. As they are mainly intended for internal use, their interfaces are subject to change.
### Usage
```
Rd2HTML(Rd, out = "", package = "", defines = .Platform$OS.type,
Links = NULL, Links2 = NULL,
stages = "render", outputEncoding = "UTF-8",
dynamic = FALSE, no_links = FALSE, fragment = FALSE,
stylesheet = "R.css", ...)
Rd2txt(Rd, out = "", package = "", defines = .Platform$OS.type,
stages = "render", outputEncoding = "",
fragment = FALSE, options, ...)
Rd2latex(Rd, out = "", defines = .Platform$OS.type,
stages = "render", outputEncoding = "UTF-8",
fragment = FALSE, ..., writeEncoding = TRUE)
Rd2ex(Rd, out = "", defines = .Platform$OS.type,
stages = "render", outputEncoding = "UTF-8",
commentDontrun = TRUE, commentDonttest = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `Rd` | a filename or `Rd` object to use as input. |
| `out` | a filename or connection object to which to write the output. The default `out = ""` is equivalent to `out = [stdout](../../base/html/showconnections)()`. |
| `package` | the package to list in the output. |
| `defines` | string(s) to use in `#ifdef` tests. |
| `stages` | at which stage (`"build"`, `"install"`, or `"render"`) should `\Sexpr` macros be executed? See the notes below. |
| `outputEncoding` | see the ‘Encodings’ section below. |
| `dynamic` | logical: set links for render-time resolution by dynamic help system. |
| `no_links` | logical: suppress hyperlinks to other help topics. Used by `R CMD [Rdconv](../../base/html/rdutils)`. |
| `fragment` | logical: should fragments of Rd files be accepted? See the notes below. |
| `stylesheet` | character: a URL for a stylesheet to be used in the header of the HTML output page. |
| `Links, Links2` | `NULL` or a named (by topics) character vector of links, as returned by `[findHTMLlinks](htmllinks)`. |
| `options` | An optional named list of options to pass to `[Rd2txt\_options](rd2txt_options)`. |
| `...` | additional parameters to pass to `[parse\_Rd](parse_rd)` when `Rd` is a filename. |
| `writeEncoding` | should `\inputencoding` lines be written in the file for non-ASCII encodings? |
| `commentDontrun` | should `\dontrun` sections be commented out? |
| `commentDonttest` | should `\donttest` sections be commented out? |
### Details
These functions convert help documents: `Rd2HTML` produces HTML, `Rd2txt` produces plain text, `Rd2latex` produces LaTeX. `Rd2ex` extracts the examples in the format used by `[example](../../utils/html/example)` and **R** utilities.
Each of the functions accepts a filename for an Rd file, and will use `[parse\_Rd](parse_rd)` to parse it before applying the conversions or checks.
The difference between arguments `Links` and `Links2` is that links are looked up in them in turn, so lazy evaluation can be used to only do a second-level search for links if required.
Before **R** 3.6.0, the default for `Rd2latex` was `outputEncoding = "ASCII"`, including using the second option of `\enc` markup, because LaTeX versions did not provide enough coverage of UTF-8 glyphs for a long time.
`Rd2txt` will format text paragraphs to a width determined by `width`, with appropriate margins. The default is to be close to the rendering in versions of **R** < 2.10.0.
`Rd2txt` will use directional quotes (see `[sQuote](../../base/html/squote)`) if option `"useFancyQuotes"` is true (the default) and the current encoding is UTF-8.
Various aspects of formatting by `Rd2txt` are controlled by the `options` argument, documented with the `[Rd2txt\_options](rd2txt_options)` function. Changes made using `options` are temporary, those made with `[Rd2txt\_options](rd2txt_options)` are persistent.
When `fragment = TRUE`, the `Rd` file will be rendered with no processing of `\Sexpr` elements or conditional defines using `#ifdef` or `#ifndef`. Normally a fragment represents text within a section, but if the first element of the fragment is a section macro, the whole fragment will be rendered as a series of sections, without the usual sorting.
### Value
These functions are executed mainly for the side effect of writing the converted help page. Their value is the name of the output file (invisibly). For `Rd2latex`, the output name is given an attribute `"latexEncoding"` giving the encoding of the file in a form suitable for use with the LaTeX inputenc package.
### Encodings
Rd files are normally intended to be rendered on a wide variety of systems, so care must be taken in the encoding of non-ASCII characters. In general, any such encoding should be declared using the encoding section for there to be any hope of correct rendering.
For output, the `outputEncoding` argument will be used: `outputEncoding = ""` will choose the native encoding for the current system.
If the text cannot be converted to the `outputEncoding`, byte substitution will be used (see `[iconv](../../base/html/iconv)`): `Rd2latex` and `Rd2ex` give a warning.
### Note
The `\Sexpr` macro is a new addition to Rd files. It includes **R** code that will be executed at one of three times: *build* time (when a package's source code is built into a tarball), *install* time (when the package is installed or built into a binary package), and *render* time (when the man page is converted to a readable format).
For example, this man page was:
1. built on 2021-05-31 at 21:58:26,
2. installed on 2021-05-31 at 21:58:26, and
3. rendered on 2021-05-31 at 22:08:58.
### Author(s)
Duncan Murdoch, Brian Ripley
### References
<https://developer.r-project.org/parseRd.pdf>
### See Also
`[parse\_Rd](parse_rd)`, `[checkRd](checkrd)`, `[findHTMLlinks](htmllinks)`, `[Rd2txt\_options](rd2txt_options)`.
### Examples
```
## Not run:
## Simulate install and rendering of this page in HTML and text format:
Rd <- file.path("src/library/tools/man/Rd2HTML.Rd")
outfile <- tempfile(fileext = ".html")
browseURL(Rd2HTML(Rd, outfile, package = "tools",
stages = c("install", "render")))
outfile <- tempfile(fileext = ".txt")
file.show(Rd2txt(Rd, outfile, package = "tools",
stages = c("install", "render")))
checkRd(Rd) # A stricter test than Rd2HTML uses
## End(Not run)
```
r None
`dependsOnPkgs` Find Reverse Dependencies
------------------------------------------
### Description
Find ‘reverse’ dependencies of packages, that is those packages which depend on this one, and (optionally) so on recursively.
### Usage
```
dependsOnPkgs(pkgs,
dependencies = "strong",
recursive = TRUE, lib.loc = NULL,
installed =
utils::installed.packages(lib.loc, fields = "Enhances"))
```
### Arguments
| | |
| --- | --- |
| `pkgs` | a character vector of package names. |
| `dependencies` | a character vector listing the types of dependencies, a subset of `c("Depends", "Imports", "LinkingTo", "Suggests", "Enhances")`. Character string `"all"` is shorthand for that vector, character string `"most"` for the same vector without `"Enhances"`, character string `"strong"` (default) for the first three elements of that vector. |
| `recursive` | logical: should reverse dependencies of reverse dependencies (and so on) be included? |
| `lib.loc` | a character vector of **R** library trees, or `NULL` for all known trees (see `[.libPaths](../../base/html/libpaths)`). |
| `installed` | a result of calling `[installed.packages](../../utils/html/installed.packages)`. |
### Value
A character vector of package names, which does not include any from `pkgs`.
### See Also
`<package_dependencies>()` to get the regular (“forward”) dependencies of a package.
### Examples
```
## there are few dependencies in a vanilla R installation:
## lattice may not be installed
dependsOnPkgs("lattice")
```
r None
`add_datalist` Add a ‘datalist’ File to a Source Package
---------------------------------------------------------
### Description
The `[data](../../utils/html/data)()` command with no arguments lists all the datasets available via `data` in attached packages, and to do so a per-package list is installed. Creating that list at install time can be slow for packages with huge datasets, and can be expedited by supplying a ‘data/datalist’ file.
### Usage
```
add_datalist(pkgpath, force = FALSE, small.size = 1024^2)
```
### Arguments
| | |
| --- | --- |
| `pkgpath` | The path to a (source) package. |
| `force` | logical: can an existing ‘data/datalist’ file be over-written? |
| `small.size` | number: a ‘data/datalist’ file is created only if the total size of the data files is larger than `small.size` bytes. |
### Details
`R CMD build` will call this function to add a data list to packages with 1MB or more of files in the ‘data’ directory.
It can also be helpful to give a ‘data/datalist’ file in packages whose datasets have many dependencies, including loading the package itself (and maybe others).
### See Also
`[data](../../utils/html/data)`.
The ‘Writing R Extensions’ manual.
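### Examples
A sketch of direct use on a package source tree (the path is illustrative; `R CMD build` normally does this automatically):
```
## Not run:
add_datalist("~/mypackages/bigdata", force = TRUE)
## End(Not run)
```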
r None
`CRANtools` CRAN Package Repository Tools
------------------------------------------
### Description
Tools for obtaining information about current packages in the CRAN package repository, and their check status.
### Usage
```
CRAN_package_db()
CRAN_check_results(flavors = NULL)
CRAN_check_details(flavors = NULL)
CRAN_check_issues()
summarize_CRAN_check_status(packages,
results = NULL,
details = NULL,
issues = NULL)
```
### Arguments
| | |
| --- | --- |
| `packages` | a character vector of package names. |
| `flavors` | a character vector of CRAN check flavor names, or `NULL` (default), corresponding to all available flavors. |
| `results` | the return value of `CRAN_check_results()` (default), or a subset of this. |
| `details` | the return value of `CRAN_check_details()` (default), or a subset of this. |
| `issues` | the return value of `CRAN_check_issues()` (default), or a subset of this. |
### Details
`CRAN_package_db()` returns a character data frame with most ‘DESCRIPTION’ metadata for the current packages in the CRAN package repository, including in particular the Description and Maintainer information not provided by `utils::[available.packages](../../utils/html/available.packages)()`.
`CRAN_check_results()` returns a data frame with the basic CRAN package check results including timings, with columns `Package`, `Flavor` and `Status` giving the package name, check flavor, and overall check status, respectively.
`CRAN_check_details()` returns a data frame inheriting from class `"check_details"` (which has useful `print` and `format` methods) with details on the check results, providing check name, status and output for every non-OK check (*via* columns `Check`, `Status` and `Output`, respectively). Packages with all-OK checks are indicated via a `*` `Check` wildcard name and OK `Status`.
`CRAN_check_issues()` returns a character frame with additional check issues (including the memory-access check results made available from <https://www.stats.ox.ac.uk/pub/bdr/memtests/> and the without-long-double check results from <https://www.stats.ox.ac.uk/pub/bdr/noLD/>), with variables `Package`, `Version`, `kind` (an identifier for the issue) and `href` (a URL with information on the issue).
`CRAN_memtest_notes()` is now deprecated, with its functionality integrated into that of `CRAN_check_issues()`.
### Value
See ‘Details’. Note that the results are collated on CRAN: currently this is done in a locale which sorts `aAbB` ....
### Which CRAN?
The main functions access a CRAN mirror specified by the environment variable R\_CRAN\_WEB, defaulting to one specified in the ‘repositories’ file (see `[setRepositories](../../utils/html/setrepositories)`): if that specifies `@CRAN@` (the default) then <https://CRAN.R-project.org> is used. (Note that `[options](../../base/html/options)("repos")` is not consulted.)
Note that these access parts of CRAN under ‘web/contrib’ and ‘web/packages’ so if you have specified a mirror of just ‘src/contrib’ for installing packages you will need to set R\_CRAN\_WEB to point to a full mirror.
### Examples
```
## This can be rather slow, especially with a non-local CRAN mirror
## and would fail (slowly) without Internet access in that case.
set.seed(11) # but the packages chosen will change as soon as CRAN does.
pdb <- CRAN_package_db()
dim(pdb)
## DESCRIPTION fields included:
colnames(pdb)
## Summarize publication dates:
summary(as.Date(pdb$Published))
## Summarize numbers of packages according to maintainer:
summary(lengths(split(pdb$Package, pdb$Maintainer)))
## Packages with 'LASSO' in their Description:
pdb$Package[grepl("LASSO", pdb$Description)]
results <- CRAN_check_results()
## Available variables:
names(results)
## Tabulate overall check status according to flavor:
with(results, table(Flavor, Status))
details <- CRAN_check_details()
## Available variables:
names(details)
## Tabulate checks according to their status:
tab <- with(details, table(Check, Status))
## Inspect some installation problems:
bad <- subset(details,
((Check == "whether package can be installed") &
(Status != "OK")))
## Show a random sample of up to 6
head(bad[sample(seq_len(NROW(bad)), NROW(bad)), ])
issues <- CRAN_check_issues()
head(issues)
## Show counts of issues according to kind:
table(issues[, "kind"])
## Summarize CRAN check status for 10 randomly-selected packages
## (reusing the information already read in):
pos <- sample(seq_len(NROW(pdb)), 10L)
summarize_CRAN_check_status(pdb[pos, "Package"],
results, details, issues)
```
r None
`showNonASCII` Pick Out Non-ASCII Characters
---------------------------------------------
### Description
This function prints elements of a character vector which contain non-ASCII bytes, printing such bytes as an escape like `<fc>`.
### Usage
```
showNonASCII(x)
showNonASCIIfile(file)
```
### Arguments
| | |
| --- | --- |
| `x` | a character vector. |
| `file` | path to a file. |
### Details
This was originally written to help detect non-portable text in files in packages.
It prints all elements of `x` which contain non-ASCII characters, preceded by the element number and with non-ASCII bytes highlighted *via* `[iconv](../../base/html/iconv)(sub = "byte")`.
### Value
The elements of `x` containing non-ASCII characters will be returned invisibly.
### Examples
```
out <- c(
"fa\xE7ile test of showNonASCII():",
"\\details{",
" This is a good line",
" This has an \xfcmlaut in it.",
" OK again.",
"}")
f <- tempfile()
cat(out, file = f, sep = "\n")
showNonASCIIfile(f)
unlink(f)
```
r None
`toTitleCase` Convert Titles to Title Case
-------------------------------------------
### Description
Convert a character vector to title case, especially package titles.
### Usage
```
toTitleCase(text)
```
### Arguments
| | |
| --- | --- |
| `text` | a character vector. |
### Details
This is intended for English text only.
No definition of ‘title case’ is universally accepted: all agree that ‘principal’ words are capitalized and common words like ‘for’ are not, but there is no agreement on which words fall into each category.
Generally words in all capitals are left alone: this implementation knows about conventional mixed-case words such as ‘LaTeX’ and ‘OpenBUGS’ and a few technical terms which are not usually capitalized such as ‘jar’ and ‘xls’. However, unknown technical terms will be capitalized unless they are single words enclosed in single quotes: names of packages and libraries should be quoted in titles.
### Value
A character vector of the same length as `text`, without names.
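### Examples
A small illustration with made-up titles:
```
toTitleCase(c("a tale of two cities",
              "tools for reading xls files with 'readxl'"))
```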
r None
`bibstyle` Select or Define a Bibliography Style
-------------------------------------------------
### Description
This function defines and registers styles for rendering `[bibentry](../../utils/html/bibentry)` objects into ‘Rd’ format, for later conversion to text, HTML, etc.
### Usage
```
bibstyle(style, envir, ..., .init = FALSE, .default = TRUE)
getBibstyle(all = FALSE)
```
### Arguments
| | |
| --- | --- |
| `style` | A character string naming the style. |
| `envir` | (optional) An environment holding the functions to implement the style. |
| `...` | Named arguments to add to the environment. |
| `.init` | Whether to initialize the environment from the default style `"JSS"`. |
| `.default` | Whether to set the specified style as the default style. |
| `all` | Whether to return the names of all registered styles. |
### Details
Rendering of `[bibentry](../../utils/html/bibentry)` objects may be done using routines modelled after those used by BibTeX. This function allows environments to be created and manipulated to contain those routines.
There are two ways to create a new style environment. The easiest is to set `.init = TRUE`, in which case the environment will be initialized with a copy of the default `"JSS"` environment. (This style is modelled after the ‘jss.bst’ style used by the *Journal of Statistical Software*.) Alternatively, the `envir` argument can be used to specify a completely new style environment.
To find the name of the default style, use `getBibstyle()`. To retrieve an existing style without setting it as the default, use `bibstyle(style, .default = FALSE)`. To modify an existing style, specify `style` and some named entries via `...`. (Modifying the default `"JSS"` style is discouraged.) Setting `style` to `NULL` or leaving it missing will retrieve the default style, but modifications will not be allowed.
At a minimum, the environment should contain routines to render each of the 12 types of bibliographic entry supported by `[bibentry](../../utils/html/bibentry)` as well as several other routines described below. The former must be named `formatArticle`, `formatBook`, `formatInbook`, `formatIncollection`, `formatInProceedings`, `formatManual`, `formatMastersthesis`, `formatMisc`, `formatPhdthesis`, `formatProceedings`, `formatTechreport` and `formatUnpublished`. Each of these takes one argument, a single `[unclass](../../base/html/class)`'ed entry from the `[bibentry](../../utils/html/bibentry)` vector passed to the renderer, and should produce a single element character vector (possibly containing newlines).
The other routines are as follows. `sortKeys`, a function to produce a sort key to sort the entries, is passed the original `[bibentry](../../utils/html/bibentry)` vector and should produce a sortable vector of the same length to define the sort order. Finally, the optional function `cite` should have the same argument list as `utils::[cite](../../utils/html/cite)`, and should produce a citation to be used in text.
The `[format](../../base/html/format)` method for `"bibentry"` objects adds a field named `".index"` to each entry after sorting and before formatting. This is a 1-based index within the complete object that can be used in styles that require numbering. Although the `"JSS"` style doesn't use numbers, it includes a `fmtPrefix()` stub function that may be used to display them. See the example below.
### Value
`bibstyle` returns the environment which has been selected or created.
`getBibstyle` returns the name of the default style, or all style names.
### Author(s)
Duncan Murdoch
### See Also
`[bibentry](../../utils/html/bibentry)`
### Examples
```
refs <-
c(bibentry(bibtype = "manual",
title = "R: A Language and Environment for Statistical Computing",
author = person("R Core Team"),
organization = "R Foundation for Statistical Computing",
address = "Vienna, Austria",
year = 2013,
url = "https://www.R-project.org"),
bibentry(bibtype = "article",
author = c(person(c("George", "E.", "P."), "Box"),
person(c("David", "R."), "Cox")),
year = 1964,
title = "An Analysis of Transformations",
journal = "Journal of the Royal Statistical Society, Series B",
volume = 26,
pages = "211-252"))
bibstyle("unsorted", sortKeys = function(refs) seq_along(refs),
fmtPrefix = function(paper) paste0("[", paper$.index, "]"),
.init = TRUE)
print(refs, .bibstyle = "unsorted")
```
r None
`getVignetteInfo` Get Information on Installed Vignettes
---------------------------------------------------------
### Description
This function gets information on installed vignettes.
### Usage
```
getVignetteInfo(package = NULL, lib.loc = NULL, all = TRUE)
```
### Arguments
| | |
| --- | --- |
| `package` | Which package to look in, or `NULL` for all packages. |
| `lib.loc` | Which library to look in. |
| `all` | Whether to search all installed packages, or just attached packages. |
### Value
A matrix with columns
| | |
| --- | --- |
| `Package` | the name of the package |
| `Dir` | the directory where the package is installed |
| `Topic` | the name of the vignette |
| `File` | the base filename of the source of the vignette |
| `Title` | the title of the vignette |
| `R` | the tangled R source from the vignette |
| `PDF` | the PDF or HTML file for display |
### Note
The last column of the result is named `PDF` for historical reasons, but it may contain a filename of a PDF or HTML document.
### See Also
`[pkgVignettes](buildvignettes)` is a similar function that can work on an uninstalled package.
### Examples
```
getVignetteInfo("grid")
```
r None
`checkRdaFiles` Report on Details of Saved Images or Re-saves them
-------------------------------------------------------------------
### Description
This reports for each of the files produced by `save` the size, if it was saved in ASCII or XDR binary format, and if it was compressed (and if so in what format).
Usually such files have extension ‘.rda’ or ‘.RData’, hence the name of the function.
### Usage
```
checkRdaFiles(paths)
resaveRdaFiles(paths, compress = c("auto", "gzip", "bzip2", "xz"),
compression_level, version = NULL)
```
### Arguments
| | |
| --- | --- |
| `paths` | A character vector of paths to `save` files. If this specifies a single directory, it is taken to refer to all ‘.rda’ and ‘.RData’ files in that directory. |
| `compress, compression_level` | Type and level of compression: see `[save](../../base/html/save)`. Values of `compress` can be abbreviated. |
| `version` | The format to be used when re-saving: see `[save](../../base/html/save)`. |
### Details
`compress = "auto"` asks **R** to choose the compression and ignores `compression_level`. It will try `"gzip"`, `"bzip2"` and if the `"gzip"` compressed size is over 10Kb, `"xz"` and choose the smallest compressed file (but with a 10% bias towards `"gzip"`). This can be slow.
For back-compatibility, `version = NULL` is interpreted to mean version 2: however version-3 files will only be saved as version 3.
### Value
For `checkRdaFiles`, a data frame with row names `paths` and columns
| | |
| --- | --- |
| `size` | numeric: file size in bytes, `NA` if the file does not exist. |
| `ASCII` | logical: true for save(ASCII = TRUE), `NA` if the format is not that of an **R** save file. |
| `compress` | character: type of compression. One of `"gzip"`, `"bzip2"`, `"xz"`, `"none"` or `"unknown"` (which means that if this is an **R** save file it is from a later version of **R**). |
| `version` | integer: positive with the version(s) of the `[save](../../base/html/save)()`, see there on which versions have been default in which versions of **R**, and `NA` for non-Rda files. |
### Examples
```
## Not run:
## from a package top-level source directory
paths <- sort(Sys.glob(c("data/*.rda", "data/*.RData")))
(res <- checkRdaFiles(paths))
## pick out some that may need attention
bad <- is.na(res$ASCII) | res$ASCII | (res$size > 1e4 & res$compress == "none")
res[bad, ]
## End(Not run)
```
r None
`vignetteEngine` Set or Get a Vignette Processing Engine
---------------------------------------------------------
### Description
Vignettes are normally processed by `[Sweave](../../utils/html/sweave)`, but package writers may choose to use a different engine (e.g., one provided by the [knitr](https://CRAN.R-project.org/package=knitr), [noweb](https://CRAN.R-project.org/package=noweb) or [R.rsp](https://CRAN.R-project.org/package=R.rsp) packages). This function is used by those packages to register their engines, and internally by **R** to retrieve them.
### Usage
```
vignetteEngine(name, weave, tangle, pattern = NULL,
package = NULL, aspell = list())
```
### Arguments
| | |
| --- | --- |
| `name` | the name of the engine. |
| `weave` | a function to convert vignette source files to PDF/HTML or intermediate LaTeX output. |
| `tangle` | a function to convert vignette source files to **R** code. |
| `pattern` | a regular expression pattern for the filenames handled by this engine, or `NULL` for the default pattern. |
| `package` | the package registering the engine. By default, this is the package calling `vignetteEngine`. |
| `aspell` | a list with element names `filter` and/or `control` giving the respective arguments to be used when spell checking the text in the vignette source file with `[aspell](../../utils/html/aspell)`. |
### Details
If `weave` is missing, `vignetteEngine` will return the currently registered engine matching `name` and `package`.
If `weave` is `NULL`, the specified engine will be deleted.
Other settings define a new engine. The `weave` and `tangle` functions must be defined with argument lists compatible with `function(file, ...)`. Currently the `...` arguments may include logical argument `quiet` and character argument `encoding`; others may be added in future. These are described in the documentation for `[Sweave](../../utils/html/sweave)` and `[Stangle](../../utils/html/sweave)`.
The `weave` and `tangle` functions should return the filename of the output file that has been produced. Currently the `weave` function, when operating on a file named ‘<name><pattern>’ must produce a file named ‘<name>[.](tex|pdf|html)’. The ‘.tex’ files will be processed by `pdflatex` to produce ‘.pdf’ output for display to the user; the others will be displayed as produced. The `tangle` function must produce a file named ‘<name>[.][rRsS]’ containing the executable **R** code from the vignette. The `tangle` function may support a `split = TRUE` argument, and then it should produce files named ‘<name>.\*[.][rRsS]’.
The `pattern` argument gives a regular expression to match the extensions of files which are to be processed as vignette input files. If set to `NULL`, the default pattern `"[.][RrSs](nw|tex)$"` is used.
### Value
If the engine is being deleted, `NULL`. Otherwise a list containing components
| | |
| --- | --- |
| `name` | The name of the engine |
| `package` | The name of its package |
| `pattern` | The pattern for vignette input files |
| `weave` | The weave function |
| `tangle` | The tangle function |
### Author(s)
Duncan Murdoch and Henrik Bengtsson.
### See Also
`[Sweave](../../utils/html/sweave)` and the ‘Writing R Extensions’ manual.
### Examples
```
str(vignetteEngine("Sweave"))
```
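As a hypothetical sketch (the engine, package and file names are invented, and real engines do substantially more work in `weave` and `tangle`), a package could register an engine from its `.onLoad()`:
```
## Not run:
vignetteEngine("myweave", package = "myPkg", pattern = "[.]myw$",
    weave = function(file, ...) {
        ## a real engine would process the file; this toy one just copies it
        out <- paste0(file_path_sans_ext(file), ".tex")
        file.copy(file, out, overwrite = TRUE)
        out
    },
    tangle = function(file, ...) {
        out <- paste0(file_path_sans_ext(file), ".R")
        writeLines("## no code chunks in this toy engine", out)
        out
    })
## End(Not run)
```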
r None
`checkMD5sums` Check and Create MD5 Checksum Files
---------------------------------------------------
### Description
`checkMD5sums` checks the files against a file ‘MD5’.
### Usage
```
checkMD5sums(package, dir)
```
### Arguments
| | |
| --- | --- |
| `package` | the name of an installed package |
| `dir` | the path to the top-level directory of an installed package. |
### Details
The file ‘MD5’ which is created is in a format which can be checked by `md5sum -c MD5` if a suitable command-line version of `md5sum` is available. (For Windows, one is supplied in the bundle at <https://cran.r-project.org/bin/windows/Rtools/>.)
If `dir` is missing, an installed package of name `package` is searched for.
The private function `tools:::.installMD5sums` is used to create `MD5` files in the Windows build.
### Value
`checkMD5sums` returns a logical, `NA` if there is no ‘MD5’ file to be checked.
### See Also
`<md5sum>`
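### Examples
For example, checking an installed copy of this package (the result is `NA` if the installation has no ‘MD5’ file, as is common for source installs):
```
checkMD5sums("tools")
```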
r None
`Rd2txt_options` Set Formatting Options for Text Help
------------------------------------------------------
### Description
This function sets various options for displaying text help.
### Usage
```
Rd2txt_options(...)
```
### Arguments
| | |
| --- | --- |
| `...` | A list containing named options, or options passed as individual named arguments. See below for currently defined ones. |
### Details
This function persistently sets various formatting options for the `[Rd2txt](rd2html)` function which is used in displaying text format help. Currently defined options are:
* `width` (default 80): The width of the output page.
* `minIndent` (default 10): The minimum indent to use in a list.
* `extraIndent` (default 4): The extra indent to use in each level of nested lists.
* `sectionIndent` (default 5): The indent level for a section.
* `sectionExtra` (default 2): The extra indentation for each nested section level.
* `itemBullet` (default `"* "`, with the asterisk replaced by a Unicode bullet in UTF-8 and most Windows locales): The symbol to use as a bullet in itemized lists.
* `enumFormat`: A function to format item numbers in enumerated lists.
* `showURLs` (default `FALSE`): Whether to show URLs when expanding `\href` tags.
* `code_quote` (default `TRUE`): Whether to render `\code` and similar with single quotes.
* `underline_titles` (default `TRUE`): Whether to render section titles with underlines (via backspacing).
### Value
If called with no arguments, returns all option settings in a list. Otherwise, it changes the named settings and invisibly returns their previous values.
### Author(s)
Duncan Murdoch
### See Also
`[Rd2txt](rd2html)`
### Examples
```
# The itemBullet is locale-specific
saveOpts <- Rd2txt_options()
saveOpts
Rd2txt_options(minIndent = 4)
Rd2txt_options()
Rd2txt_options(saveOpts)
Rd2txt_options()
```
r None
`texi2dvi` Compile LaTeX Files
-------------------------------
### Description
Run `latex`/`pdflatex`, `makeindex` and `bibtex` until all cross-references are resolved to create a dvi or a PDF file.
### Usage
```
texi2dvi(file, pdf = FALSE, clean = FALSE, quiet = TRUE,
texi2dvi = getOption("texi2dvi"),
texinputs = NULL, index = TRUE)
texi2pdf(file, clean = FALSE, quiet = TRUE,
texi2dvi = getOption("texi2dvi"),
texinputs = NULL, index = TRUE)
```
### Arguments
| | |
| --- | --- |
| `file` | character string. Name of the LaTeX source file. |
| `pdf` | logical. If `TRUE`, a PDF file is produced instead of the default dvi file (`texi2dvi` command line option --pdf). |
| `clean` | logical. If `TRUE`, all auxiliary files created during the conversion are removed. |
| `quiet` | logical. No output unless an error occurs. |
| `texi2dvi` | character string (or `NULL`). Script or program used to compile a TeX file to dvi or PDF. The default (selected by `""` or `"texi2dvi"` or `NULL`) is to look for a program or script named ‘texi2dvi’ on the path and otherwise emulate the script with `system2` calls (which can be selected by the value `"emulation"`). See also ‘Details’. |
| `texinputs` | `NULL` or a character vector of paths to add to the LaTeX and bibtex input search paths. |
| `index` | logical: should indices be prepared? |
### Details
`texi2pdf` is a wrapper for the common case of `texi2dvi(pdf = TRUE)`.
Despite the name, this is used in **R** to compile LaTeX files, specifically those generated from vignettes and by the `[Rd2pdf](../../base/html/rdutils)` script (used for package reference manuals). It ensures that the ‘[R\_HOME](../../base/html/rhome)/share/texmf’ directory is in the TEXINPUTS path, so **R** style files such as ‘Sweave’ and ‘Rd’ will be found. The TeX search path used is first the existing TEXINPUTS setting (or the current directory if unset), then elements of argument `texinputs`, then ‘R\_HOME/share/texmf’ and finally the default path. Analogous changes are made to BIBINPUTS and BSTINPUTS settings.
The default option for `texi2dvi` is set from environment variable R\_TEXI2DVICMD, and the default for that is set from environment variable TEXI2DVI or if that is unset, from a value chosen when **R** is configured.
A shell script `texi2dvi` is part of GNU's texinfo. Several issues have been seen with released versions, so if yours does not work correctly try R\_TEXI2DVICMD=emulation.
Occasionally indices contain special characters which cause indexing to fail (particularly when using the hyperref LaTeX package) even on valid input. The argument `index = FALSE` is provided to allow package manuals to be made when this happens: it uses emulation.
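A typical call, for example from a build script, might look like the following sketch (‘mydoc.tex’ is a placeholder file name):
```
## Not run:
texi2pdf("mydoc.tex", clean = TRUE)              # produce mydoc.pdf, removing auxiliary files
texi2dvi("mydoc.tex", pdf = TRUE, clean = TRUE)  # equivalent call
## End(Not run)
```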
### Value
Invisible `NULL`. Used for the side effect of creating a dvi or PDF file in the current working directory (and maybe other files, especially if `clean = FALSE`).
### Note
There are various versions of the `texi2dvi` script on Unix-alikes and quite a number of bugs have been seen, some of which this **R** wrapper works around.
One that was present with `texi2dvi` version `4.8` (as supplied by macOS) is that it will not work correctly for paths which contain spaces, nor if the absolute path to a file would contain spaces.
The three possible approaches all have their quirks. For example the Unix-alike `texi2dvi` script removes ancillary files that already exist but the other two approaches do not (and may get confused by such files).
Where supported (`texi2dvi` 5.0 and later; `texify.exe` from MiKTeX), option --max-iterations=20 is used to avoid infinite retries.
The emulation mode supports `quiet = TRUE` from **R** 3.2.3 only. Currently `clean = TRUE` only cleans up in this mode if the conversion was successful—this gives users a chance to examine log files in the event of error.
All the approaches should respect the values of environment variables LATEX, PDFLATEX, MAKEINDEX and BIBTEX for the full paths to the corresponding commands.
### Author(s)
Originally Achim Zeileis but largely rewritten by R-core.
r None
`HTMLlinks` Collect HTML Links from Package Documentation
----------------------------------------------------------
### Description
Compute relative file paths for URLs to other packages' installed HTML documentation.
### Usage
```
findHTMLlinks(pkgDir = "", lib.loc = NULL, level = 0:2)
```
### Arguments
| | |
| --- | --- |
| `pkgDir` | the top-level directory of an installed package. The default indicates no package. |
| `lib.loc` | character vector describing the location of **R** library trees to scan: the default indicates `[.libPaths](../../base/html/libpaths)()`. |
| `level` | Which level(s) to include. |
### Details
`findHTMLlinks` tries to resolve links from one help page to another. It uses in decreasing priority
* The package in `pkgDir`: this is used when converting HTML help for that package (level 0).
* The base and recommended packages (level 1).
* Other packages found in the library trees specified by `lib.loc` in the order of the trees and alphabetically within a library tree (level 2).
### Value
A named character vector of file paths, relative to the ‘html’ directory of an installed package. So these are of the form ‘"../../somepkg/html/sometopic.html"’.
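For example, the links known from the base and recommended packages alone can be collected as follows (a minimal sketch):
```
links <- findHTMLlinks(level = 1)  # base and recommended packages only
head(links)
```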
### Author(s)
Duncan Murdoch, Brian Ripley
r None
`compactPDF` Compact PDF Files
-------------------------------
### Description
Re-save PDF files (especially vignettes) more compactly. Support function for `R CMD build --compact-vignettes`.
### Usage
```
compactPDF(paths,
qpdf = Sys.which(Sys.getenv("R_QPDF", "qpdf")),
gs_cmd = Sys.getenv("R_GSCMD", ""),
gs_quality = Sys.getenv("GS_QUALITY", "none"),
gs_extras = character())
## S3 method for class 'compactPDF'
format(x, ratio = 0.9, diff = 1e4, ...)
```
### Arguments
| | |
| --- | --- |
| `paths` | A character vector of paths to PDF files, or a length-one character vector naming a directory, when all ‘.pdf’ files in that directory will be used. |
| `qpdf` | Character string giving the path to the `qpdf` command. If empty, `qpdf` will not be used. |
| `gs_cmd` | Character string giving the path to the GhostScript executable, if that is to be used. On Windows this is the path to ‘gswin32c.exe’ or ‘gswin64c.exe’. If `""` (the default), the function will try to find a platform-specific path to GhostScript where required. |
| `gs_quality` | A character string indicating the quality required: the options are `"none"` (so GhostScript is not used), `"printer"` (300dpi), `"ebook"` (150dpi) and `"screen"` (72dpi). Can be abbreviated. |
| `gs_extras` | An optional character vector of further options to be passed to GhostScript. |
| `x` | An object of class `"compactPDF"`. |
| `ratio, diff` | Limits for reporting: files are only reported whose sizes are reduced both by a factor of `ratio` and by `diff` bytes. |
| `...` | Further arguments to be passed to or from other methods. |
### Details
This by default makes use of `qpdf`, available from <http://qpdf.sourceforge.net/> (including as a Windows binary) and included with the CRAN macOS distribution of **R**. If `gs_cmd` is non-empty and `gs_quality != "none"`, GhostScript will be used first, then `qpdf` if it is available. If `gs_quality != "none"` and `gs_cmd` is `""`, an attempt will be made to find a GhostScript executable.
`qpdf` and/or `gs_cmd` are run on all PDF files found, and those which are reduced in size by at least 10% and 10Kb are replaced.
The strategy of our use of `qpdf` is to (losslessly) compress both PDF streams and objects. GhostScript compresses streams and more (including downsampling and compressing embedded images) and consequently is much slower and may lose quality (but can also produce much smaller PDF files). However, quality `"ebook"` is perfectly adequate for screen viewing and printing on laser printers.
Where PDF files are changed they will become PDF version 1.5 files: these have been supported by Acrobat Reader since version 6 in 2003, so this is very unlikely to cause difficulties.
Stream compression is what most often has large gains. Most PDF documents are generated with object compression, but this does not seem to be the default for MiKTeX's `pdflatex`. For some PDF files (and especially package vignettes), using GhostScript can dramatically reduce the space taken by embedded images (often screenshots).
Where both GhostScript and `qpdf` are selected (when `gs_quality != "none"` and both executables are found), they are run in that order and the size reductions apply to the total compression achieved.
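For example, compacting the PDFs in a package's ‘vignettes’ directory might look like this (a minimal sketch; the directory name is a placeholder and qpdf and/or GhostScript must be installed):
```
## Not run:
res <- compactPDF("vignettes")                           # lossless, qpdf only
res
res_gs <- compactPDF("vignettes", gs_quality = "ebook")  # GhostScript, then qpdf
format(res_gs, ratio = 0.8, diff = 5e4)  # report files reduced by at least 20% and 50000 bytes
## End(Not run)
```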
### Value
An object of class `c("compactPDF", "data.frame")`. This has two columns, the old and new sizes in bytes for the files that were changed.
There are `format` and `print` methods: the latter passes `...` to the format method, so will accept `ratio` and `diff` arguments.
### Note
The external tools used may change in future releases.
Versions of GhostScript 9.06 and later give several times better compression than 9.05 on some vignettes in CRAN packages.
### See Also
`[resaveRdaFiles](checkrdafiles)`.
Many other (and sometimes more effective) tools to compact PDF files are available, including Adobe Acrobat (not Reader). See the ‘Writing R Extensions’ manual.
r None
`vignetteInfo` Basic Information about a Vignette
--------------------------------------------------
### Description
Provide basic information including package and dependency of a vignette from its source file.
### Usage
```
vignetteInfo(file)
```
### Arguments
| | |
| --- | --- |
| `file` | file name of the vignette. |
### Value
a `[list](../../base/html/list)` with components, each a possibly empty `[character](../../base/html/character)`:
| | |
| --- | --- |
| `file` | the `[basename](../../base/html/basename)` of the file. |
| `title` | the vignette title. |
| `depends` | the package dependencies. |
| `keywords` | keywords if provided. |
| `engine` | the vignette engine such as `"Sweave"`, `"knitr"`, etc. |
### Note
`vignetteInfo(file)$depends` is a substitute for the deprecated `vignetteDepends()` functionality.
### See Also
`<package_dependencies>`
### Examples
```
## This may not be installed, as it requires lattice
gridEx <- system.file("doc", "grid.Rnw", package = "grid")
vi <- vignetteInfo(gridEx)
str(vi)
```
r None
`initialize-methods` Methods to Initialize New Objects from a Class
--------------------------------------------------------------------
### Description
The arguments to function `<new>` to create an object from a particular class can be interpreted specially for that class, by the definition of a method for function `initialize` for the class. This documentation describes some existing methods, and also outlines how to write new ones.
### Methods
`signature(.Object = "ANY")`
The default method for `initialize` takes either named or unnamed arguments. Argument names must be the names of slots in this class definition, and the corresponding arguments must be valid objects for the slot (that is, have the same class as specified for the slot, or some superclass of that class). If the object comes from a superclass, it is not coerced strictly, so normally it will retain its current class (specifically, `<as>(object, Class, strict = FALSE)`).
Unnamed arguments must be objects of this class, of one of its superclasses, or one of its subclasses (from the class, from a class this class extends, or from a class that extends this class). If the object is from a superclass, this normally defines some of the slots in the object. If the object is from a subclass, the new object is that argument, coerced to the current class.
Unnamed arguments are processed first, in the order they appear. Then named arguments are processed. Therefore, explicit values for slots always override any values inferred from superclass or subclass arguments.
`signature(.Object = "traceable")`
Objects of a class that extends `traceable` are used to implement debug tracing (see class [traceable](traceclasses) and `[trace](../../base/html/trace)`).
The `initialize` method for these classes takes special arguments `def, tracer, exit, at, print`. The first of these is the object to use as the original definition (e.g., a function). The others correspond to the arguments to `[trace](../../base/html/trace)`.
`signature(.Object = "environment")`, `signature(.Object = ".environment")`
The `initialize` method for environments takes a named list of objects to be used to initialize the environment. Subclasses of `"environment"` inherit an initialize method through `".environment"`, which has the additional effect of allocating a new environment. If you define your own method for such a subclass, be sure either to call the existing method via `[callNextMethod](nextmethod)` or allocate an environment in your method, since environments are references and are not duplicated automatically.
`signature(.Object = "signature")`
This is a method for internal use only. It takes an optional `functionDef` argument to provide a generic function with a `signature` slot to define the argument names. See [Methods\_Details](methods_details) for details.
### Writing Initialization Methods
Initialization methods provide a general mechanism corresponding to generator functions in other languages.
The arguments to `[initialize](new)` are `.Object` and .... Nearly always, `initialize` is called from `new`, not directly. The `.Object` argument is then the prototype object from the class.
Two techniques are often appropriate for `initialize` methods: special argument names and `callNextMethod`.
You may want argument names that are more natural to your users than the (default) slot names. These will be the formal arguments to your method definition, in addition to `.Object` (always) and ... (optionally). For example, the method for class `"traceable"` documented above would be created by a call to `[setMethod](setmethod)` of the form:
```
setMethod("initialize", "traceable",
function(.Object, def, tracer, exit, at, print) { .... }
)
```
In this example, no other arguments are meaningful, and the resulting method will throw an error if other names are supplied.
When your new class extends another class, you may want to call the initialize method for this superclass (either a special method or the default). For example, suppose you want to define a method for your class, with special argument `x`, but you also want users to be able to set slots specifically. If you want `x` to override the slot information, the beginning of your method definition might look something like this:
```
function(.Object, x, ...) {
.Object <- callNextMethod(.Object, ...)
if(!missing(x)) { # do something with x
```
You could also choose to have the inherited method override, by first interpreting `x`, and then calling the next method.
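Putting these techniques together, a complete method might look like the following sketch (the class `"posNumber"` and the extra argument `x` are invented for illustration):
```
setClass("posNumber", slots = c(value = "numeric", label = "character"))
setMethod("initialize", "posNumber",
    function(.Object, x, ...) {
        ## let the default method fill slots supplied by name (e.g. 'label')
        .Object <- callNextMethod(.Object, ...)
        if(!missing(x)) {
            if(any(x < 0)) stop("'x' must be non-negative")
            .Object@value <- x
        }
        .Object
    })
p <- new("posNumber", x = c(1, 2, 3), label = "lengths")
```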
r None
`setGroupGeneric` Create a Group Generic Version of a Function
---------------------------------------------------------------
### Description
The `setGroupGeneric` function behaves like `[setGeneric](setgeneric)` except that it constructs a group generic function, differing in two ways from an ordinary generic function. First, this function cannot be called directly, and the body of the function created will contain a stop call with this information. Second, the group generic function contains information about the known members of the group, used to keep the members up to date when the group definition changes, through changes in the search list or direct specification of methods, etc.
All members of the group must have the identical argument list.
### Usage
```
setGroupGeneric(name, def= , group=list(), valueClass=character(),
knownMembers=list(), package= , where= )
```
### Arguments
| | |
| --- | --- |
| `name` | the character string name of the generic function. |
| `def` | A function object. There isn't likely to be an existing nongeneric of this name, so some function needs to be supplied. Any known member or other function with the same argument list will do, because the group generic cannot be called directly. |
| `group, valueClass` | arguments to pass to `[setGeneric](setgeneric)`. |
| `knownMembers` | the names of functions that are known to be members of this group. This information is used to reset cached definitions of the member generics when information about the group generic is changed. |
| `package, where` | passed to `[setGeneric](setgeneric)`, but obsolete and to be avoided. |
### Value
The `setGroupGeneric` function exists for its side effect: saving the generic function to allow methods to be specified later. It returns `name`.
### References
Chambers, John M. (2016) *Extending R* Chapman & Hall
### See Also
`[Methods\_Details](methods_details)` and the links there for a general discussion, `[dotsMethods](dotsmethods)` for methods that dispatch on `...`, and `[setMethod](setmethod)` for method definitions.
### Examples
```
## Not run:
## the definition of the "Logic" group generic in the methods package
setGroupGeneric("Logic", function(e1, e2) NULL,
knownMembers = c("&", "|"))
## End(Not run)
```
r None
`Documentation` Using and Creating On-line Documentation for Classes and Methods
---------------------------------------------------------------------------------
### Description
Special documentation can be supplied to describe the classes and methods that are created by the software in the methods package. Techniques to access this documentation and to create it in R help files are described here.
### Getting documentation on classes and methods
You can ask for on-line help for class definitions, for specific methods for a generic function, and for general discussion of methods for a generic function. These requests use the `?` operator (see `[help](../../utils/html/help)` for a general description of the operator). Of course, you are at the mercy of the implementer as to whether there *is* any documentation on the corresponding topics.
Documentation on a class uses the argument `class` on the left of the `?`, and the name of the class on the right; for example,
`class ? genericFunction`
to ask for documentation on the class `"genericFunction"`.
When you want documentation for the methods defined for a particular function, you can ask either for a general discussion of the methods or for documentation of a particular method (that is, the method that would be selected for a particular set of actual arguments).
Overall methods documentation is requested by calling the `?` operator with `methods` as the left-side argument and the name of the function as the right-side argument. For example,
`methods ? initialize`
asks for documentation on the methods for the `[initialize](new)` function.
Asking for documentation on a particular method is done by giving a function call expression as the right-hand argument to the `"?"` operator. There are two forms, depending on whether you prefer to give the class names for the arguments or expressions that you intend to use in the actual call.
If you planned to evaluate a function call, say `myFun(x, sqrt(wt))` and wanted to find out something about the method that would be used for this call, put the call on the right of the `"?"` operator:
`?myFun(x, sqrt(wt))`
A method will be selected, as it would be for the call itself, and documentation for that method will be requested. If `myFun` is not a generic function, ordinary documentation for the function will be requested.
If you know the actual classes for which you would like method documentation, you can supply these explicitly in place of the argument expressions. In the example above, if you want method documentation for the first argument having class `"maybeNumber"` and the second `"logical"`, call the `"?"` operator, this time with a left-side argument `method`, and with a function call on the right using the class names as arguments:
`method ? myFun("maybeNumber", "logical")`
Once again, a method will be selected, this time corresponding to the specified classes, and method documentation will be requested. This version only works with generic functions.
The two forms each have advantages. The version with actual arguments doesn't require you to figure out (or guess at) the classes of the arguments. On the other hand, evaluating the arguments may take some time, depending on the example. The version with class names does require you to pick classes, but it's otherwise unambiguous. It has a subtler advantage, in that the classes supplied may be virtual classes, in which case no actual argument will have specifically this class. The class `"maybeNumber"`, for example, might be a class union (see the example for `[setClassUnion](setclassunion)`).
In either form, methods will be selected as they would be in actual computation, including use of inheritance and group generic functions. See `[selectMethod](getmethod)` for the details, since it is the function used to find the appropriate method.
### Writing Documentation for Methods
The on-line documentation for methods and classes uses some extensions to the R documentation format to implement the requests for class and method documentation described above. See the document *Writing R Extensions* for the available markup commands (you should have consulted this document already if you are at the stage of documenting your software).
In addition to the specific markup commands to be described, you can create an initial, overall file with a skeleton of documentation for the methods defined for a particular generic function:
`promptMethods("myFun")`
will create a file, ‘myFun-methods.Rd’ with a skeleton of documentation for the methods defined for function `myFun`. The output from `promptMethods` is suitable if you want to describe all or most of the methods for the function in one file, separate from the documentation of the generic function itself. Once the file has been filled in and moved to the ‘man’ subdirectory of your source package, requests for methods documentation will use that file, both for specific methods documentation as described above, and for overall documentation requested by
`methods ? myFun`
You are not required to use `promptMethods`, and if you do, you may not want to use the entire file created:
* If you want to document the methods in the file containing the documentation for the generic function itself, you can cut-and-paste to move the `\alias` lines and the `Methods` section from the file created by `promptMethods` to the existing file.
* On the other hand, if these are auxiliary methods, and you only want to document the added or modified software, you should strip out all but the relevant `\alias` lines for the methods of interest, and remove all but the corresponding `\item` entries in the `Methods` section. Note that in this case you will usually remove the first `\alias` line as well, since that is the marker for general methods documentation on this function (in the example, \alias{myfun-methods}).
If you simply want to direct documentation for one or more methods to a particular R documentation file, insert the appropriate alias.
r None
`LanguageClasses` Classes to Represent Unevaluated Language Objects
--------------------------------------------------------------------
### Description
The virtual class `"language"` and the specific classes that extend it represent unevaluated objects, as produced for example by the parser or by functions such as `[quote](../../base/html/substitute)`.
### Usage
```
### each of these classes corresponds to an unevaluated object
### in the S language.
### The class name can appear in method signatures,
### and in a few other contexts (such as some calls to as()).
"("
"<-"
"call"
"for"
"if"
"repeat"
"while"
"name"
"{"
### Each of the classes above extends the virtual class
"language"
```
### Objects from the Class
`"language"` is a virtual class; no objects may be created from it.
Objects from the other classes can be generated by a call to `new(Class, ...)`, where `Class` is the quoted class name, and the ... arguments are either empty or a *single* object that is from this class (or an extension).
### Methods
coerce
`signature(from = "ANY", to = "call")`. A method exists for `as(object, "call")`, calling `as.call()`.
### Examples
```
showClass("language")
is( quote(sin(x)) ) # "call" "language"
(ff <- new("if")) ; is(ff) # "if" "language"
(ff <- new("for")) ; is(ff) # "for" "language"
```
r None
`classesToAM` Compute an Adjacency Matrix for Superclasses of Class Definitions
--------------------------------------------------------------------------------
### Description
Given a vector of class names or a list of class definitions, the function returns an adjacency matrix of the superclasses of these classes; that is, a matrix with class names as the row and column names and with element [i, j] being 1 if the class in column j is a direct superclass of the class in row i, and 0 otherwise.
The matrix has the information implied by the `contains` slot of the class definitions, but in a form that is often more convenient for further analysis; for example, an adjacency matrix is used in packages and other software to construct graph representations of relationships.
### Usage
```
classesToAM(classes, includeSubclasses = FALSE,
abbreviate = 2)
```
### Arguments
| | |
| --- | --- |
| `classes` | Either a character vector of class names or a list, whose elements can be either class names or class definitions. The list is convenient, for example, to include the package slot for the class name. See the examples. |
| `includeSubclasses` | A logical flag; if `TRUE`, then the matrix will include all the known subclasses of the specified classes as well as the superclasses. The argument can also be a logical vector of the same length as `classes`, to include subclasses for some but not all the classes. |
| `abbreviate` | Control of the abbreviation of the row and/or column labels of the matrix returned: values 0, 1, 2, or 3 abbreviate neither, rows, columns or both. The default, 2, is useful for printing the matrix, since class names tend to be more than one character long, making for spread-out printing. Values of 0 or 3 would be appropriate for making a graph (3 avoids the tendency of some graph plotting software to produce labels in minuscule font size). |
### Details
For each of the classes, the calculation gets all the superclass names from the class definition, and finds the edges in those classes' definitions; that is, all the superclasses at distance 1. The corresponding elements of the adjacency matrix are set to 1.
The adjacency matrices for the individual class definitions are merged. Note two possible kinds of inconsistency, neither of which should cause problems except possibly with identically named classes from different packages. Edges are computed from each superclass definition, so that information overrides a possible inference from extension elements with distance > 1 (and it should). When matrices from successive classes in the argument are merged, the computations do not currently check for inconsistencies—this is the area where possible multiple classes with the same name could cause confusion. A later revision may include consistency checks.
### Value
As described, a matrix with entries 0 or 1, non-zero values indicating that the class corresponding to the column is a direct superclass of the class corresponding to the row. The row and column names are the class names (without package slot).
### See Also
`[extends](is)` and [classRepresentation](classrepresentation-class) for the underlying information from the class definition.
### Examples
```
## the super- and subclasses of "standardGeneric"
## and "derivedDefaultMethod"
am <- classesToAM(list(class(show), class(getMethod(show))), TRUE)
am
## Not run:
## the following function depends on the Bioconductor package Rgraphviz
plotInheritance <- function(classes, subclasses = FALSE, ...) {
if(!require("Rgraphviz", quietly=TRUE))
stop("Only implemented if Rgraphviz is available")
mm <- classesToAM(classes, subclasses)
classes <- rownames(mm); rownames(mm) <- colnames(mm)
graph <- new("graphAM", mm, "directed", ...)
plot(graph)
cat("Key:\n", paste(abbreviate(classes), " = ", classes, ", ",
sep = ""), sep = "", fill = TRUE)
invisible(graph)
}
## The plot of the class inheritance of the package "graph"
require(graph)
plotInheritance(getClasses("package:graph"))
## End(Not run)
```
r None
`show` Show an Object
----------------------
### Description
Display the object, by printing, plotting or whatever suits its class. This function exists to be specialized by methods. The default method calls `[showDefault](methodutilities)`.
Formal methods for `show` will usually be invoked for automatic printing (see the details).
### Usage
```
show(object)
```
### Arguments
| | |
| --- | --- |
| `object` | Any R object |
### Details
Objects from an S4 class (a class defined by a call to `[setClass](setclass)`) will be displayed automatically as if by a call to `show`. S4 objects that occur as attributes of S3 objects will also be displayed in this form; conversely, S3 objects encountered as slots in S4 objects will be printed using the S3 convention, as if by a call to `[print](../../base/html/print)`.
Methods defined for `show` will only be inherited by simple inheritance, since otherwise the method would not receive the complete, original object, with misleading results. See the `simpleInheritanceOnly` argument to `[setGeneric](setgeneric)` and the discussion in `[setIs](setis)` for the general concept.
### Value
`show` returns an invisible `NULL`.
### See Also
`[showMethods](showmethods)` prints all the methods for one or more functions.
### Examples
```
## following the example shown in the setMethod documentation ...
setClass("track", slots = c(x="numeric", y="numeric"))
setClass("trackCurve", contains = "track", slots = c(smooth = "numeric"))
t1 <- new("track", x=1:20, y=(1:20)^2)
tc1 <- new("trackCurve", t1)
setMethod("show", "track",
function(object)print(rbind(x = object@x, y=object@y))
)
## The method will now be used for automatic printing of t1
t1
## Not run: [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
x 1 2 3 4 5 6 7 8 9 10 11 12
y 1 4 9 16 25 36 49 64 81 100 121 144
[,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20]
x 13 14 15 16 17 18 19 20
y 169 196 225 256 289 324 361 400
## End(Not run)
## and also for tc1, an object of a class that extends "track"
tc1
## Not run: [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
x 1 2 3 4 5 6 7 8 9 10 11 12
y 1 4 9 16 25 36 49 64 81 100 121 144
[,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20]
x 13 14 15 16 17 18 19 20
y 169 196 225 256 289 324 361 400
## End(Not run)
```
r None
`setSClass` Create a Class Definition
--------------------------------------
### Description
Constructs an object of class `[classRepresentation](classrepresentation-class)` to describe a particular class. Mostly a utility function, but you can call it to create a class definition without assigning it, as `[setClass](setclass)` would do.
### Usage
```
makeClassRepresentation(name, slots=list(), superClasses=character(),
prototype=NULL, package, validity, access,
version, sealed, virtual=NA, where)
```
### Arguments
| | |
| --- | --- |
| `name` | character string name for the class |
| `slots` | named list of slot classes as would be supplied to `setClass`, but *without* the unnamed arguments for superClasses if any. |
| `superClasses` | what classes does this class extend |
| `prototype` | an object providing the default data for the class, e.g., the result of a call to `[prototype](representation)`. |
| `package` | The character string name for the package in which the class will be stored; see `[getPackageName](getpackagename)`. |
| `validity` | Optional validity method. See `[validObject](validobject)`, and the discussion of validity methods in the reference. |
| `access` | Access information. Not currently used. |
| `version` | Optional version key for version control. Currently generated, but not used. |
| `sealed` | Is the class sealed? See `[setClass](setclass)`. |
| `virtual` | Is this known to be a virtual class? |
| `where` | The environment from which to look for class definitions needed (e.g., for slots or superclasses). See the discussion of this argument under [GenericFunctions](genericfunctions). |
### References
Chambers, John M. (2008) *Software for Data Analysis: Programming with R* Springer. (For the R version.)
Chambers, John M. (1998) *Programming with Data* Springer (For the original S4 version.)
### See Also
`[setClass](setclass)`
r None
`setClassUnion` Classes Defined as the Union of Other Classes
--------------------------------------------------------------
### Description
A class may be defined as the *union* of other classes; that is, as a virtual class defined as a superclass of several other classes. Class unions are useful in method signatures or as slots in other classes, when we want to allow one of several classes to be supplied.
### Usage
```
setClassUnion(name, members, where)
isClassUnion(Class)
```
### Arguments
| | |
| --- | --- |
| `name` | the name for the new union class. |
| `members` | the names of the classes that should be members of this union. |
| `where` | where to save the new class definition. In calls from a package's source code, should be omitted to save the definition in the package's namespace. |
| `Class` | the name or definition of a class. |
### Details
The classes in `members` must be defined before creating the union. However, members can be added later on to an existing union, as shown in the example below. Class unions can be members of other class unions.
Class unions are the only way to create a new superclass of a class whose definition is sealed. The namespace of all packages is sealed when the package is loaded, protecting the class and other definitions from being overwritten from another class or from the global environment. A call to `[setIs](setis)` that tried to define a new superclass for class `"numeric"`, for example, would cause an error.
Class unions are the exception; the class union `"maybeNumber"` in the examples defines itself as a new superclass of `"numeric"`. Technically, it does not alter the metadata object in the other package's namespace and, of course, the effect of the class union depends on loading the package it belongs to. But, basically, class unions are sufficiently useful to justify the exemption.
The different behavior for class unions is made possible because the class definition object for class unions has itself a special class, `"ClassUnionRepresentation"`, an extension of class `[classRepresentation](classrepresentation-class)`.
### References
Chambers, John M. (2016) *Extending R*, Chapman & Hall. (Chapters 9 and 10.)
### Examples
```
## a class for either numeric or logical data
setClassUnion("maybeNumber", c("numeric", "logical"))
## use the union as the data part of another class
setClass("withId", contains = "maybeNumber", slots = c(id = "character"))
w1 <- new("withId", 1:10, id = "test 1")
w2 <- new("withId", sqrt(w1)%%1 < .01, id = "Perfect squares")
## add class "complex" to the union "maybeNumber"
setIs("complex", "maybeNumber")
w3 <- new("withId", complex(real = 1:10, imaginary = sqrt(1:10)))
## a class union containing the existing class union "OptionalFunction"
setClassUnion("maybeCode",
c("expression", "language", "OptionalFunction"))
is(quote(sqrt(1:10)), "maybeCode") ## TRUE
```
r None
`S3Part` S4 Classes that Contain S3 Classes
--------------------------------------------
### Description
A regular (S4) class may contain an S3 class, if that class has been registered (by calling `[setOldClass](setoldclass)`). The functions described here provide information about contained S3 classes. See the section ‘Functions’.
In modern **R**, these functions are not usually needed to program with objects from the S4 class. Standard computations work as expected, including method selection for both S4 and S3. To coerce an object to its contained S3 class, use either of the expressions:
`as(object, S3Class); as(object, "S3")`
where `S3Class` evaluates to the name of the contained class. These return slightly different objects, which in rare cases may need to be distinguished. See the section “Contained S3 Objects”.
### Usage
```
S3Part(object, strictS3 = FALSE, S3Class)
S3Class(object)
isXS3Class(classDef)
slotsFromS3(object)
## the replacement versions of the functions are not recommended
## Create a new object from the class or use the replacement version of as().
S3Part(object, strictS3 = FALSE, needClass = ) <- value
S3Class(object) <- value
```
### Arguments
| | |
| --- | --- |
| `object` | an object from some class that extends a registered S3 class, or a basic vector, matrix or array object type. For most of the functions, an S3 object can also be supplied, with the interpretation that it is its own S3 part. |
| `strictS3` | If `TRUE`, the value returned by `S3Part` will be an S3 object, with all the S4 slots removed. Otherwise, an S4 object will always be returned; for example, from the S4 class created by `[setOldClass](setoldclass)` as a proxy for an S3 class, rather than the underlying S3 object. |
| `S3Class` | the `[character](../../base/html/character)` vector to be stored as the S3 class slot in the object. Usually, and by default, retains the slot from `object`, but an S3 superclass is allowed. |
| `classDef` | a class definition object, as returned by `[getClass](getclass)`. *The remaining arguments apply only to the replacement versions, which are not recommended.* |
| `needClass` | Require that the replacement value be this class or a subclass of it. |
| `value` | For `S3Part<-`, the replacement value for the S3 part of the object. For `S3Class<-`, the character vector that will be used as a proxy for `class(x)` in S3 method dispatch. |
### Functions
`S3Part`: Returns an object from the S3 class that appeared in the `contains=` argument to `[setClass](setclass)`.
If called with `strictS3 = TRUE`, `S3Part()` constructs the underlying S3 object by eliminating all the formally defined slots and turning off the S4 bit of the object. With `strictS3 = FALSE` the object returned is from the corresponding S4 class. For consistency and generality, `S3Part()` works also for classes that extend the basic vector, matrix and array classes.
A call to `S3Part()` is equivalent to coercing the object to class `"S3"` for the strict case, or to whatever the specific S3 class was, for the non-strict case. The `as()` calls are usually easier for readers to understand.
`S3Class`: Returns the character vector of S3 class(es) stored in the object, if the class has the corresponding `.S3Class` slot. Currently, the function defaults to `[class](../../base/html/class)` otherwise.
`isXS3Class`: Returns `TRUE` or `FALSE` according to whether the class defined by `ClassDef` extends S3 classes (specifically, whether it has the slot for holding the S3 class).
`slotsFromS3`: returns a list of the relevant slot classes, or an empty list for any other object.
The function `slotsFromS3()` is a generic function used internally to access the slots associated with the S3 part of the object. Methods for this function are created automatically when `[setOldClass](setoldclass)` is called with the `S4Class` argument. Usually, there is only one S3 slot, containing the S3 class, but the `S4Class` argument may provide additional slots, in the case that the S3 class has some guaranteed attributes that can be used as formal S4 slots. See the corresponding section in the documentation of `[setOldClass](setoldclass)`.
### Contained S3 Objects
Registering an S3 class defines an S4 class. Objects from this class are essentially identical in content to an object from the S3 class, except for two differences. The value returned by `[class](../../base/html/class)()` will always be a single string for the S4 object, and `[isS4](../../base/html/iss4)()` will return `TRUE` or `FALSE` in the two cases. See the example below. It is barely possible that some S3 code will not work with the S4 object; if so, use `as(x, "S3")`.
Objects from a class that extends an S3 class will have some basic type and possibly some attributes. For an S3 class that has an equivalent S4 definition (e.g., `"data.frame"`), an extending S4 class will have a data part and slots. For other S3 classes (e.g., `"lm"`) an object from the extending S4 class will be some sort of basic type, nearly always a vector type (e.g., `"list"` for `"lm"`), but the data part will not have a formal definition.
Registering an S3 class by a call to `[setOldClass](setoldclass)` creates a class of the same name with a slot `".S3Class"` to hold the corresponding S3 vector of class strings. New S4 classes that extend such classes also have the same slot, set to the S3 class of the contained S3 *object*, which may be an (S3) subclass of the registered class. For example, an S4 class might contain the S3 class `"lm"`, but an object from the class might contain an object from class `"mlm"`, as in the `"xlm"` example below.
**R** is somewhat arbitrary about what it treats as an S3 class: `"ts"` is, but `"matrix"` and `"array"` are not. For classes that extend those, assuming they contain an S3 class is incorrect and will cause some confusion—not usually disastrous, but the better strategy is to stick to the explicit “class”. Thus `as(x, "matrix")` rather than `as(x, "S3")` or `S3Part(x)`.
### S3 and S4 Objects: Conversion Mechanisms
Objects in **R** have an internal bit that indicates whether or not to treat the object as coming from an S4 class. This bit is tested by `[isS4](../../base/html/iss4)` and can be set on or off by `[asS4](../../base/html/iss4)`. The latter function, however, does no checking or interpretation; you should only use it if you are very certain every detail has been handled correctly.
As a friendlier alternative, methods have been defined for coercing to the virtual classes `"S3"` and `"S4"`. The expressions `as(object, "S3")` and `as(object, "S4")` return S3 and S4 objects, respectively. In addition, they attempt to do conversions in a valid way, and also check validity when coercing to S4.
The expression `as(object, "S3")` can be used in two ways. For objects from one of the registered S3 classes, the expression will ensure that the class attribute is the full multi-string S3 class implied by `class(object)`. If the registered class has known attribute/slots, these will also be provided.
Another use of `as(object, "S3")` is to take an S4 object and turn it into an S3 object with corresponding attributes. This is only meaningful with S4 classes that have a data part. If you want to operate on the object without invoking S4 methods, this conversion is usually the safest way.
The expression `as(object, "S4")` will use the attributes in the object to create an object from the S4 definition of `class(object)`. This is a general mechanism to create partially defined version of S4 objects via S3 computations (not much different from invoking `<new>` with corresponding arguments, but usable in this form even if the S4 object has an initialize method with different arguments).
### References
Chambers, John M. (2016) *Extending R*, Chapman & Hall. (Chapters 9 and 10, particularly Section 10.8)
### See Also
`[setOldClass](setoldclass)`
### Examples
```
## an "mlm" object, regressing two variables on two others
sepal <- as.matrix(datasets::iris[,c("Sepal.Width", "Sepal.Length")])
fit <- lm(sepal ~ Petal.Length + Petal.Width + Species, data = datasets::iris)
class(fit) # S3 class: "mlm", "lm"
## a class that contains "mlm"
myReg <- setClass("myReg", slots = c(title = "character"), contains = "mlm")
fit2 <- myReg(fit, title = "Sepal Regression for iris data")
fit2 # shows the inherited "mlm" object and the title
identical(S3Part(fit2), as(fit2, "mlm"))
class(as(fit2, "mlm")) # the S4 class, "mlm"
class(as(fit2, "S3")) # the S3 class, c("mlm", "lm")
## An object may contain an S3 class from a subclass of that declared:
xlm <- setClass("xlm", slots = c(eps = "numeric"), contains = "lm")
xfit <- xlm(fit, eps = .Machine$double.eps)
xfit@.S3Class # c("mlm", "lm")
```
r None
`setLoadActions` Set Actions For Package Loading
-------------------------------------------------
### Description
These functions provide a mechanism for packages to specify computations to be done during the loading of a package namespace. Such actions are a flexible way to provide information only available at load time (such as locations in a dynamically linked library).
A call to `setLoadAction()` or `setLoadActions()` specifies one or more functions to be called when the corresponding namespace is loaded, with the ... argument names being used as identifying names for the actions.
`getLoadActions` reports the currently defined load actions, given a package's namespace as its argument.
`hasLoadAction` returns `TRUE` if a load action corresponding to the given name has previously been set for the `where` namespace.
`evalOnLoad()` and `evalqOnLoad()` schedule a specific expression for evaluation at load time.
### Usage
```
setLoadAction(action, aname=, where=)
setLoadActions(..., .where=)
getLoadActions(where=)
hasLoadAction(aname, where=)
evalOnLoad(expr, where=, aname=)
evalqOnLoad(expr, where=, aname=)
```
### Arguments
| | |
| --- | --- |
| `action, ...` | functions of one or more arguments, to be called when this package is loaded. The functions will be called with one argument (the package namespace) so all following arguments must have default values. If the elements of ... are named, these names will be used for the corresponding load metadata. |
| `where, .where` | the namespace of the package for which the list of load actions are defined. This argument is normally omitted if the call comes from the source code for the package itself, but will be needed if a package supplies load actions for another package. |
| `aname` | the name for the action. If an action is set without supplying a name, the default uses the position in the sequence of actions specified (`".1"`, etc.). |
| `expr` | an expression to be evaluated in a load action in environment `where`. In the case of `evalqOnLoad()`, the expression is interpreted literally, in that of `evalOnLoad()` it must be precomputed, typically as an object of type `"language"`. |
### Details
The `evalOnLoad()` and `evalqOnLoad()` functions are for convenience. They construct a function to evaluate the expression and call `setLoadAction()` to schedule a call to that function.
Each of the functions supplied as an argument to `setLoadAction()` or `setLoadActions()` is saved as metadata in the namespace, typically that of the package containing the call to `setLoadActions()`. When this package's namespace is loaded, each of these functions will be called. Action functions are called in the order they are supplied to `setLoadActions()`. The objects assigned have metadata names constructed from the names supplied in the call; unnamed arguments are taken to be named by their position in the list of actions (`".1"`, etc.).
Multiple calls to `setLoadAction()` or `setLoadActions()` can be used in a package's code; the actions will be scheduled after any previously specified, except if the name given to `setLoadAction()` is that of an existing action. In typical applications, `setLoadActions()` is more convenient when calling from the package's own code to set several actions. Calls to `setLoadAction()` are more convenient if the action name is to be constructed, which is more typical when one package constructs load actions for another package.
Actions can be revised by assigning with the same name, actual or constructed, in a subsequent call. The replacement must still be a valid function, but can of course do nothing if the intention was to remove a previously specified action.
The functions must have at least one argument. They will be called with one argument, the namespace of the package. The functions will be called at the end of processing of S4 metadata, after dynamically linking any compiled code, the call to `.onLoad()`, if any, and caching method and class definitions, but before the namespace is sealed. (Load actions are only called if methods dispatch is on.)
Functions may therefore assign or modify objects in the namespace supplied as the argument in the call. The mechanism allows packages to save information not available until load time, such as values obtained from a dynamically linked library.
Load actions should be contrasted with user load hooks supplied by `[setHook](../../base/html/userhooks)()`. User hooks are generally provided from outside the package and are run after the namespace has been sealed. Load actions are normally part of the package code, and the list of actions is normally established when the package is installed.
Load actions can be supplied directly in the source code for a package. It is also possible and useful to provide facilities in one package to create load actions in another package. The software needs to be careful to assign the action functions in the correct environment, namely the namespace of the target package.
### Value
`setLoadAction()` and `setLoadActions()` are called for their side effect and return no useful value.
`getLoadActions()` returns a named list of the actions in the supplied namespace.
`hasLoadAction()` returns `TRUE` if the specified action name appears in the actions for this package.
### See Also
`[setHook](../../base/html/userhooks)` for safer (since they are run after the namespace is sealed) and more comprehensive versions in the base package.
### Examples
```
## Not run:
## in the code for some package
## ... somewhere else
setLoadActions(function(ns)
cat("Loaded package", sQuote(getNamespaceName(ns)),
"at", format(Sys.time()), "\n"),
setCount = function(ns) assign("myCount", 1, envir = ns),
function(ns) assign("myPointer", getMyExternalPointer(), envir = ns))
## ... somewhere later
if(countShouldBe0)
setLoadAction(function(ns) assign("myCount", 0, envir = ns), "setCount")
## End(Not run)
```
r None
`promptMethods` Generate a Shell for Documentation of Formal Methods
---------------------------------------------------------------------
### Description
Generates a shell of documentation for the methods of a generic function.
### Usage
```
promptMethods(f, filename = NULL, methods)
```
### Arguments
| | |
| --- | --- |
| `f` | a character string naming the generic function whose methods are to be documented. |
| `filename` | usually, a connection or a character string giving the name of the file to which the documentation shell should be written. The default corresponds to the coded topic name for these methods (currently, `f` followed by `"-methods.Rd"`). Can also be `FALSE` or `NA` (see below). |
| `methods` | optional `"[listOfMethods](findmethods)"` object giving the methods to be documented. By default, the first methods object for this generic is used (for example, if the current global environment has some methods for `f`, these would be documented). If this argument is supplied, it is likely to be `[findMethods](findmethods)(f, where)`, with `where` some package containing methods for `f`. |
### Details
If `filename` is `FALSE`, the text created is returned, presumably to be inserted some other documentation file, such as the documentation of the generic function itself (see `[prompt](../../utils/html/prompt)`).
If `filename` is `NA`, a list-style representation of the documentation shell is created and returned. Writing the shell to a file amounts to `cat(unlist(x), file = filename, sep = "\n")`, where `x` is the list-style representation.
Otherwise, the documentation shell is written to the file specified by `filename`.
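For example, the shell for the methods of `show` can be inspected without writing a file (a minimal sketch):
```
txt <- promptMethods("show", filename = FALSE)
cat(unlist(txt), sep = "\n")
```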
### Value
If `filename` is `FALSE`, the text generated; if `filename` is `NA`, a list-style representation of the documentation shell. Otherwise, the name of the file written to is returned invisibly.
### References
Chambers, John M. (2008) *Software for Data Analysis: Programming with R* Springer. (For the R version.)
Chambers, John M. (1998) *Programming with Data* Springer (For the original S4 version.)
### See Also
`[prompt](../../utils/html/prompt)` and `[promptClass](promptclass)`
r None
`RMethodUtils` Method Utilities
--------------------------------
### Description
Utility functions to support the definition and use of formal methods. Most of these functions will not normally be called directly by the user.
### Usage
```
getGeneric(f, mustFind=FALSE, where, package)
getGroup(fdef, recursive, where)
getGroupMembers(group, recursive = FALSE, character = TRUE)
getMethodsMetaData(f, where)
assignMethodsMetaData (f, value, fdef, where)
makeGeneric(f, fdef, fdefault =, group=list(), valueClass=character(),
package =, signature = NULL, genericFunction = NULL,
simpleInheritanceOnly = NULL)
makeStandardGeneric(f, fdef)
generic.skeleton(name, fdef, fdefault)
defaultDumpName(generic, signature)
doPrimitiveMethod(name, def, call= sys.call(sys.parent()),
ev = sys.frame(sys.parent(2)))
conformMethod(signature, mnames, fnames, f= , fdef, method)
matchSignature(signature, fun, where)
findUnique(what, message, where)
MethodAddCoerce(method, argName, thisClass, methodClass)
cacheMetaData(where, attach = TRUE, searchWhere = as.environment(where),
doCheck = TRUE)
cacheGenericsMetaData(f, fdef, attach = TRUE, where, package, methods)
setPrimitiveMethods(f, fdef, code, generic, mlist)
missingArg(symbol, envir = parent.frame(), eval)
sigToEnv(signature, generic)
rematchDefinition(definition, generic, mnames, fnames, signature)
unRematchDefinition(definition)
isRematched(definition)
asMethodDefinition(def, signature, sealed = FALSE, fdef)
addNextMethod(method, f, mlist, optional, envir)
insertClassMethods(methods, Class, value, fieldNames, returnAll)
balanceMethodsList(mlist, args, check = TRUE) # <- deprecated since R 3.2.0
```
### Summary of Functions
`getGeneric`:
returns the definition of the function named `f` as a generic.
If no definition is found, throws an error or returns `NULL` according to the value of `mustFind`. By default, searches in the top-level environment (normally the global environment, but adjusted to work correctly when package code is evaluated from the function `[library](../../base/html/library)`).
Primitive functions are dealt with specially, since there is never a formal generic definition for them. The value returned is the formal definition used for assigning methods to this primitive. Not all primitives can have methods; if this one can't, then `getGeneric` returns `NULL` or throws an error.
`getGroup`:
returns the groups to which this generic belongs, searching from environment `where` (the global environment normally by default).
If `recursive=TRUE`, also all the group(s) of these groups.
`getGroupMembers`:
Return all the members of the group generic function named `group`. If `recursive` is `TRUE`, and some members are group generics, includes their members as well. If `character` is `TRUE`, returns just a character vector of the names; otherwise returns a list, whose elements may (or may not) include either names with a package attribute or actual generic functions.
Note that members that are not defined as generic functions will *not* be included in the returned value. To see the raw data, use `getGeneric(group)@groupMembers`.
`getMethodsMetaData`, `assignMethodsMetaData`, `mlistMetaName`:
Utilities to get (`getMethodsMetaData`) and assign (`assignMethodsMetaData`) the metadata object recording the methods defined in a particular package, or to return the mangled name for that object (`mlistMetaName`).
The assign function should not be used directly. The get function may be useful if you want explicitly only the outcome of the methods assigned in this package. Otherwise, use `[getMethods](findmethods)`.
`matchSignature`:
Matches the signature object (a partially or completely named subset of the signature arguments of the generic function object `fun`), and returns a vector of all the classes in the order specified by `fun@signature`. The classes not specified by `signature` will be `"ANY"` in the value, but extra trailing `"ANY"`s are removed. When the input signature is empty, the returned signature is a single `"ANY"` matching the first formal argument (so the returned value is always non-empty).
Generates an error if any of the supplied signature names are not legal; that is, not in the signature slot of the generic function.
If argument `where` is supplied, a warning will be issued if any of the classes does not have a formal definition visible from `where`.
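For illustration, a minimal sketch (the generic `area` and its arguments are invented here):
```
library(methods)
setGeneric("area", function(shape, units) standardGeneric("area"))
## name only the 'units' argument; 'shape' is filled in as "ANY"
matchSignature(signature(units = "character"), getGeneric("area"))
## expected value (per the description above): c("ANY", "character")
## a name not in the generic's signature, e.g. 'colour', would be an error
```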
`MethodAddCoerce`:
Possibly modify one or more methods to explicitly coerce this argument to `methodClass`, the class for which the method is explicitly defined. Only modifies the method if an explicit coerce is required to coerce from `thisClass` to `methodClass`.
`findUnique`:
Return the list of environments (or equivalent) having an object named `what`, using environment `where` and its parent environments. If more than one is found, a warning message is generated, using `message` to identify what was being searched for, unless `message` is the empty string.
`cacheMetaData`, `cacheGenericsMetaData`, `setPrimitiveMethods`:
Utilities for ensuring that the internal information about class and method definitions is up to date. Should normally be called automatically whenever needed (for example, when a method or class definition changes, or when a package is attached or detached). Required primarily because primitive functions are dispatched in C code, rather than by the official model.
The `setPrimitiveMethods` function resets the caching information for a particular primitive function. Don't call it directly.
`missingArg`:
Returns `TRUE` if the symbol supplied is missing *from the call* corresponding to the environment supplied (by default, environment of the call to `missingArg`). If `eval` is true, the argument is evaluated to get the name of the symbol to test. Note that `missingArg` is closer to the ‘Blue Book’ sense of the `[missing](../../base/html/missing)` function, not that of the current R base package implementation. But beware that it works reliably only if no assignment has yet been made to the argument. (For method dispatch this is fine, because computations are done at the beginning of the call.)
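For illustration, a minimal sketch (the function `f` is invented here); the test is made before any assignment to the argument:
```
library(methods)
f <- function(x, y) {
    ## is y missing from the call to f()?
    if (missingArg(y)) "y was not supplied" else "y was supplied"
}
f(1)    # "y was not supplied"
f(1, 2) # "y was supplied"
```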
`balanceMethodsList`:
Used to be called from `setMethod()` and is *deprecated* since **R** version 3.2.0.
`sigToEnv`:
Turn the signature (a named vector of classes) into an environment with the classes assigned to the names. The environment is then suitable for calling `[MethodsListSelect](methodslist)`, with `evalArgs=FALSE`, to select a method corresponding to the signature. Usually not called directly: see `[selectMethod](getmethod)`.
`.saveImage`:
Flag, used in dynamically initializing the methods package from `.onLoad`.
`rematchDefinition`, `unRematchDefinition`, `isRematched`:
If the specified method in a call to `[setMethod](setmethod)` specializes the argument list (by replacing ...), then `rematchDefinition` constructs the actual method stored. Using knowledge of how `rematchDefinition` works, `unRematchDefinition` reverses the procedure; if given a function or method definition that does not correspond to this form, it just returns its argument. `isRematched` returns a logical value indicating whether rematching was used when constructing a given method.
`asMethodDefinition`:
Turn a function definition into an object of class `[MethodDefinition](methoddefinition-class)`, corresponding to the given `signature` (by default generates a default method with empty signature). The definition is sealed according to the `sealed` argument.
`addNextMethod`:
A generic function that finds the next method for the signature of the method definition `method` and caches that method in the method definition (promoting the class to `"MethodWithNext"`). Note that argument `mlist` is obsolete and not used.
`makeGeneric`:
Makes a generic function object corresponding to the given function name, optional definition and optional default method. Other arguments supply optional elements for the slots of class `[genericFunction](genericfunction-class)`.
`makeStandardGeneric`:
a utility function that makes a valid function calling `standardGeneric` for name `f`. Works (more or less) even if the actual definition, `fdef`, is not a proper function, that is, it's a primitive or internal.
`conformMethod`:
If the formal arguments, `mnames`, are not identical to the formal arguments to the function, `fnames`, `conformMethod` determines whether the signature and the two sets of arguments conform, and returns the signature, possibly extended. The function name, `f` is supplied for error messages. The generic function, `fdef`, supplies the generic signature for matching purposes.
The method assignment conforms if method and generic function have identical formal argument lists. It can also conform if the method omits some of the formal arguments of the function but: (1) the non-omitted arguments are a subset of the function arguments, appearing in the same order; (2) there are no arguments to the method that are not arguments to the function; and (3) the omitted formal arguments do not appear as explicit classes in the signature. A future extension hopes to test also that the omitted arguments are not assumed by being used as locally assigned names or function names in the body of the method.
`defaultDumpName`:
the default name to be used for dumping a method.
`doPrimitiveMethod`:
do a primitive call to the builtin function `name` with the definition and call provided, carried out in the environment `ev`.
A call to `doPrimitiveMethod` is used when the actual method is a `.Primitive`. (Primitives do not behave correctly as ordinary functions, having neither formal arguments nor a function body.)
### See Also
`[setGeneric](setgeneric)`, `[setClass](setclass)`, `[showMethods](showmethods)`.
### Examples
```
getGroup("exp")
getGroup("==", recursive = TRUE)
getGroupMembers("Arith")
getGroupMembers("Math")
getGroupMembers("Ops") # -> its sub groups
```
`MethodWithNext-class` Class MethodWithNext
--------------------------------------------
### Description
Class of method definitions set up for callNextMethod
### Objects from the Class
Objects from this class are generated as a side-effect of calls to `[callNextMethod](nextmethod)`.
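For illustration, a minimal sketch (classes `A` and `B` and the generic `describe` are invented here); the call to `callNextMethod()` caches a `"MethodWithNext"` object as a side effect:
```
library(methods)
setClass("A", slots = c(x = "numeric"))
setClass("B", contains = "A")
setGeneric("describe", function(object) standardGeneric("describe"))
setMethod("describe", "A", function(object) "A-level")
setMethod("describe", "B", function(object) c("B-level", callNextMethod()))
describe(new("B", x = 1)) # "B-level" "A-level"
## the cached method used for class "B" now carries its next method and
## is expected to be of (or extend) class "MethodWithNext"
```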
### Slots
`.Data`:
Object of class `"function"`; the actual function definition.
`nextMethod`:
Object of class `"PossibleMethod"` the method to use in response to a `[callNextMethod](nextmethod)()` call.
`excluded`:
Object of class `"list"`; one or more signatures excluded in finding the next method.
`target`:
Object of class `"signature"`, from class `"MethodDefinition"`
`defined`:
Object of class `"signature"`, from class `"MethodDefinition"`
`generic`:
Object of class `"character"`; the function for which the method was created.
### Extends
Class `"MethodDefinition"`, directly.
Class `"function"`, from data part.
Class `"PossibleMethod"`, by class `"MethodDefinition"`.
Class `"OptionalMethods"`, by class `"MethodDefinition"`.
### Methods
findNextMethod
`signature(method = "MethodWithNext")`: used internally by method dispatch.
loadMethod
`signature(method = "MethodWithNext")`: used internally by method dispatch.
show
`signature(object = "MethodWithNext")`
### See Also
`[callNextMethod](nextmethod)`, and class `[MethodDefinition](methoddefinition-class)`.
`as` Force an Object to Belong to a Class
------------------------------------------
### Description
Coerce an object to a given class.
### Usage
```
as(object, Class, strict=TRUE, ext)
as(object, Class) <- value
```
### Arguments
| | |
| --- | --- |
| `object` | any **R** object. |
| `Class` | the name of the class to which `object` should be coerced. |
| `strict` | logical flag. If `TRUE`, the returned object must be strictly from the target class (unless that class is a virtual class, in which case the object will be from the closest actual class, in particular the original object, if that class extends the virtual class directly). If `strict = FALSE`, any simple extension of the target class will be returned, without further change. A simple extension is, roughly, one that just adds slots to an existing class. |
| `value` | The value to use to modify `object` (see the discussion below). You should supply an object with class `Class`; some coercion is done, but you're unwise to rely on it. |
| `ext` | an optional object defining how `Class` is extended by the class of the object (as returned by `[possibleExtends](rclassutils)`). This argument is used internally; do not use it directly. |
### Details
`as(object)` returns the version of this object coerced to be the given `Class`. When used in the replacement form on the left of an assignment, the portion of the object corresponding to `Class` is replaced by `value`.
The operation of `as()` in either form depends on the definition of coerce methods. Methods are defined automatically when the two classes are related by inheritance; that is, when one of the classes is a subclass of the other.
Coerce methods are also predefined for basic classes (including all the types of vectors, functions and a few others).
Beyond these two sources of methods, further methods are defined by calls to the `[setAs](setas)` function. See that documentation also for details of how coerce methods work. Use `showMethods(coerce)` for a list of all currently defined methods, as in the example below.
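For illustration, a minimal sketch (the classes `Celsius` and `Fahrenheit` are invented here) showing a `setAs()` method alongside a coercion supplied automatically by inheritance:
```
library(methods)
setClass("Celsius", contains = "numeric")
setClass("Fahrenheit", contains = "numeric")
setAs("Celsius", "Fahrenheit",
      function(from) new("Fahrenheit", from * 9/5 + 32))
as(new("Celsius", 100), "Fahrenheit") # an object of class "Fahrenheit", value 212
## no setAs() needed here: "Celsius" extends "numeric", so the method is automatic
as(new("Celsius", 100), "numeric")    # 100
```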
### Basic Coercion Methods
Methods are pre-defined for coercing any object to one of the basic datatypes. For example, `as(x, "numeric")` uses the existing `as.numeric` function. These and all other existing methods can be listed as shown in the example.
### References
Chambers, John M. (2016) *Extending R*, Chapman & Hall. (Chapters 9 and 10.)
### See Also
If you think of using `try(as(x, cl))`, consider `[canCoerce](cancoerce)(x, cl)` instead.
### Examples
```
## Show all the existing methods for as()
showMethods("coerce")
```
`selectSuperClasses` Super Classes (of Specific Kinds) of a Class
------------------------------------------------------------------
### Description
Return superclasses of `ClassDef`, possibly only non-virtual or direct or simple ones.
These functions are designed to be fast, and consequently only work with the `contains` slot of the corresponding class definitions.
### Usage
```
selectSuperClasses(Class, dropVirtual = FALSE, namesOnly = TRUE,
directOnly = TRUE, simpleOnly = directOnly,
where = topenv(parent.frame()))
.selectSuperClasses(ext, dropVirtual = FALSE, namesOnly = TRUE,
directOnly = TRUE, simpleOnly = directOnly)
```
### Arguments
| | |
| --- | --- |
| `Class` | name of the class or (more efficiently) the class definition object (see `[getClass](getclass)`). |
| `dropVirtual` | logical indicating if only non-virtual superclasses should be returned. |
| `namesOnly` | logical indicating if only a vector of names, instead of a named list of class extensions, should be returned. |
| `directOnly` | logical indicating if only *direct* super classes should be returned. |
| `simpleOnly` | logical indicating if only simple class extensions should be returned. |
| `where` | (only used when `Class` is not a class definition) environment where the class definition of `Class` is found. |
| `ext` | for `.selectSuperClasses()` only, a `[list](../../base/html/list)` of class extensions, typically `[getClassDef](getclass)(..)@contains`. |
### Value
a `[character](../../base/html/character)` vector (if `namesOnly` is true, as per default) or a list of class extensions (as the `contains` slot in the result of `[getClass](getclass)`).
### Note
The typical user level function is `selectSuperClasses()` which calls `.selectSuperClasses()`; i.e., the latter should only be used for efficiency reasons by experienced useRs.
### See Also
`<is>`, `[getClass](getclass)`; further, the more technical class `[classRepresentation](classrepresentation-class)` documentation.
### Examples
```
setClass("Root")
setClass("Base", contains = "Root", slots = c(length = "integer"))
setClass("A", contains = "Base", slots = c(x = "numeric"))
setClass("B", contains = "Base", slots = c(y = "character"))
setClass("C", contains = c("A", "B"))
extends("C") #--> "C" "A" "B" "Base" "Root"
selectSuperClasses("C") # "A" "B"
selectSuperClasses("C", directOnly=FALSE) # "A" "B" "Base" "Root"
selectSuperClasses("C", dropVirtual=TRUE, directOnly=FALSE)# ditto w/o "Root"
```
`EnvironmentClass` Class "environment"
---------------------------------------
### Description
A formal class for R environments.
### Objects from the Class
Objects can be created by calls of the form `new("environment", ...)`. The arguments in ..., if any, should be named and will be assigned to the newly created environment.
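For illustration, a minimal sketch:
```
library(methods)
e <- new("environment", a = 1, b = 2:4)
ls(e) # "a" "b"
e$b   # 2 3 4
## each call to new() creates a fresh environment, regardless of the arguments
```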
### Methods
coerce
`signature(from = "ANY", to = "environment")`: calls `[as.environment](../../base/html/as.environment)`.
initialize
`signature(object = "environment")`: Implements the assignments in the new environment. Note that the `object` argument is ignored; a new environment is *always* created, since environments are not protected by copying.
### See Also
`[new.env](../../base/html/environment)`
`inheritedSlotNames` Names of Slots Inherited From a Super Class
-----------------------------------------------------------------
### Description
For a class (or class definition, see `[getClass](getclass)` and the description of class `[classRepresentation](classrepresentation-class)`), give the names which are inherited from “above”, i.e., super classes, rather than by this class' definition itself.
### Usage
```
inheritedSlotNames(Class, where = topenv(parent.frame()))
```
### Arguments
| | |
| --- | --- |
| `Class` | character string or `[classRepresentation](classrepresentation-class)`, i.e., resulting from `[getClass](getclass)`. |
| `where` | environment, to be passed further to `[isClass](findclass)` and `[getClass](getclass)`. |
### Value
character vector of slot names, or `[NULL](../../base/html/null)`.
### See Also
`[slotNames](slot)`, `<slot>`, `[setClass](setclass)`, etc.
### Examples
```
.srch <- search()
library(stats4)
inheritedSlotNames("mle")
if(require("Matrix")) withAutoprint({
inheritedSlotNames("Matrix") # NULL
## whereas
inheritedSlotNames("sparseMatrix") # --> Dim & Dimnames
## i.e. inherited from "Matrix" class
cl <- getClass("dgCMatrix") # six slots, etc
inheritedSlotNames(cl) # *all* six slots are inherited
})
## Not run:
## detach package we've attached above:
for(n in rev(which(is.na(match(search(), .srch)))))
try( detach(pos = n) )
## End(Not run)
```
`MethodSupport` Additional (Support) Functions for Methods
-----------------------------------------------------------
### Description
These are *internal* support routines for computations on formal methods.
### Usage
```
listFromMethods(generic, where, table)
getMethodsForDispatch(fdef, inherited = FALSE)
cacheMethod(f, sig, def, args, fdef, inherited = FALSE)
resetGeneric(f, fdef, mlist, where, deflt)
```
### Summary of Functions
`listFromMethods`:
A list object describing the methods for the function `generic`, supplied either as the function or the name of the function. For user code, the function `[findMethods](findmethods)` or `[findMethodSignatures](findmethods)` is recommended instead, returning a simple list of methods or a character matrix of the signatures.
If `where` is supplied, this should be an environment or search list position from which a table of methods for the generic will be taken. If `table` is supplied, this is itself assumed to be such a table. If neither argument is supplied, the table is taken directly from the generic function (that is, the current set of methods defined for this generic).
Returns an object of class `"LinearMethodsList"` (see [LinearMethodsList](linearmethodslist-class)) describing all the methods in the relevant table.
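For illustration, a minimal sketch of the user-level alternatives recommended above:
```
library(methods)
sigs <- findMethodSignatures("show") # character matrix, one row per method
head(sigs)
meths <- findMethods("show")         # the methods themselves, indexed by signature
length(meths)
```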
`resetGeneric`:
reset the currently defined methods for the generic function named `f`, found in environment `where` or explicitly supplied as an argument. Other arguments are obsolete and ignored.
Called for its side effect of resetting all inherited methods in the generic function's internal table. Normally not called directly, since changes to methods and the loading and detaching of packages all generate a call automatically.
`cacheMethod`:
Store the definition for this function and signature in the method metadata for the function. Used to store extensions of coerce methods found through inheritance, and to cache methods with `[callNextMethod](nextmethod)` information.
No persistent effect, since the method metadata is session-scope only.
`getMethodsForDispatch`:
Get the table of methods (an `[environment](../../base/html/environment)` since R version 2.6.0) representing the methods for function `f`.
For user code, the function `[findMethods](findmethods)` or `[findMethodSignatures](findmethods)` is recommended instead, returning a simple list of methods or a character matrix of the signatures.
`Methods_for_S3` Methods For S3 and S4 Dispatch
------------------------------------------------
### Description
The S3 and S4 software in **R** are two generations implementing functional object-oriented programming. S3 is the original, simpler for initial programming but less general, less formal and less open to validation. The S4 formal methods and classes provide these features but require more programming.
In modern **R**, the two versions attempt to work together. This documentation outlines how to write methods for both systems by defining an S4 method for a function that dispatches S3 methods.
The systems can also be combined by using an S3 class with S4 method dispatch or in S4 class definitions. See `[setOldClass](setoldclass)`.
### S3 Method Dispatch
The **R** evaluator will ‘dispatch’ a method from a function call either when the body of the function calls the special primitive `[UseMethod](../../base/html/usemethod)` or when the call is to one of the builtin primitives such as the `math` functions or the binary operators.
S3 method dispatch looks at the class of the first argument or the class of either argument in a call to one of the primitive binary operators. In pure S3 situations, ‘class’ in this context means the class attribute or the implied class for a basic data type such as `"numeric"`. The first S3 method that matches a name in the class is called and the value of that call is the value of the original function call. For details, see [S3Methods](../../base/html/usemethod).
In modern **R**, a function `meth` in a package is registered as an S3 method for function `fun` and class `Class` by including in the package's `NAMESPACE` file the directive
`S3method(fun, Class, meth)`
By default (and traditionally), the third argument is taken to be the function `fun.Class`; that is, the name of the generic function, followed by `"."`, followed by the name of the class.
As with S4 methods, a method that has been registered will be added to a table of methods for this function when the corresponding package is loaded into the session. Older versions of **R**, copying the mechanism in S, looked for the method in the current search list, but packages should now always register S3 methods rather than requiring the package to be attached.
### Methods for S4 Classes
There are two possible mechanisms for implementing a method corresponding to an S4 class: register it as an S3 method with the S4 class name, or define and set an S4 method, which has the side effect of creating an S4 generic version of this function.
For most situations either works, but the recommended approach is to do both: register the S3 method and supply the identical function as the definition of the S4 method. This ensures that the proposed method will be dispatched for any applicable call to the function.
As an example, suppose an S4 class `"uncased"` is defined, extending `"character"` and intending to ignore upper- and lower-case. The base function `[unique](../../base/html/unique)` dispatches S3 methods. To define the class and a method for this function:
`setClass("uncased", contains = "character")`
`unique.uncased <- function(x, incomparables = FALSE, ...)
nextMethod(tolower(x))`
`setMethod("unique", "uncased", unique.uncased)`
In addition, the `NAMESPACE` for the package should contain:
`S3method(unique, uncased)`
`exportMethods(unique)`
The result is to define identical S3 and S4 methods and ensure that all calls to `unique` will dispatch that method when appropriate.
### Details
The reasons for defining both S3 and S4 methods are as follows:
1. An S4 method alone will not be seen if the S3 generic function is called directly. This will be the case, for example, if some function calls `unique()` from a package that does not make that function an S4 generic.
However, primitive functions and operators are exceptions: The internal C code will look for S4 methods if and only if the object is an S4 object. S4 method dispatch would be used to dispatch any binary operator calls where either of the operands was an S4 object, for example.
2. An S3 method alone will not be called if there is *any* eligible non-default S4 method.
So if a package defined an S3 method for `unique` for an S4 class but another package defined an S4 method for a superclass of that class, the superclass method would be chosen, probably not what was intended.
S4 and S3 method selection are designed to follow compatible rules of inheritance, as far as possible. S3 classes can be used for any S4 method selection, provided that the S3 classes have been registered by a call to `[setOldClass](setoldclass)`, with that call specifying the correct S3 inheritance pattern. S4 classes can be used for any S3 method selection; when an S4 object is detected, S3 method selection uses the contents of `[extends](is)(class(x))` as the equivalent of the S3 inheritance (the inheritance is cached after the first call).
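For illustration, a minimal sketch (the S3 class `keyed_df` and the generic `keys` are invented here) of registering an S3 class so that S4 methods can be selected for it:
```
library(methods)
setOldClass(c("keyed_df", "data.frame")) # declare the S3 inheritance to S4
setGeneric("keys", function(x) standardGeneric("keys"))
setMethod("keys", "keyed_df", function(x) attr(x, "keys"))
d <- structure(data.frame(id = 1:3, val = c(2, 4, 8)),
               class = c("keyed_df", "data.frame"), keys = "id")
keys(d) # "id"
```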
For the details of S4 and S3 dispatch see [Methods\_Details](methods_details) and [S3Methods](../../base/html/usemethod).
### References
Chambers, John M. (2016) *Extending R*, Chapman & Hall. (Chapters 9 and 10.)
`dotsMethods` The Use of ... in Method Signatures
--------------------------------------------------
### Description
The “...” argument in **R** functions is treated specially, in that it matches zero, one or more actual arguments (and so, objects). A mechanism has been added to **R** to allow “...” as the signature of a generic function. Methods defined for such functions will be selected and called when *all* the arguments matching “...” are from the specified class or from some subclass of that class.
### Using "..." in a Signature
Beginning with version 2.8.0 of **R**, S4 methods can be dispatched (selected and called) corresponding to the special argument “...”. Currently, “...” cannot be mixed with other formal arguments: either the signature of the generic function is “...” only, or it does not contain “...”. (This restriction may be lifted in a future version.)
Given a suitable generic function, methods are specified in the usual way by a call to `[setMethod](setmethod)`. The method definition must be written expecting all the arguments corresponding to “...” to be from the class specified in the method's signature, or from a class that extends that class (i.e., a subclass of that class).
Typically the methods will pass “...” down to another function or will create a list of the arguments and iterate over that. See the examples below.
When you have a computation that is suitable for more than one existing class, a convenient approach may be to define a union of these classes by a call to `[setClassUnion](setclassunion)`. See the example below.
### Method Selection and Dispatch for "..."
See [Methods\_Details](methods_details) for a general discussion. The following assumes you have read the “Method Selection and Dispatch” section of that documentation.
A method selecting on “...” is specified by a single class in the call to `[setMethod](setmethod)`. If all the actual arguments corresponding to “...” have this class, the corresponding method is selected directly.
Otherwise, the class of each argument and that class' superclasses are computed, beginning with the first “...” argument. For the first argument, eligible methods are those for any of the classes. For each succeeding argument that introduces a class not considered previously, the eligible methods are further restricted to those matching the argument's class or superclasses. If no eligible methods remain, the iteration terminates and the default method, if any, is selected.
At the end of the iteration, one or more methods may be eligible. If more than one, the selection looks for the method with the least distance to the actual arguments. For each argument, any inherited method corresponds to a distance, available from the `contains` slot of the class definition. Since the same class can arise for more than one argument, there may be several distances associated with it. Combining them is inevitably arbitrary: the current computation uses the minimum distance. Thus, for example, if a method matched one argument directly, one as first generation superclass and another as a second generation superclass, the distances are 0, 1 and 2. The current selection computation would use distance 0 for this method. In particular, this selection criterion tends to use a method that matches exactly one or more of the arguments' class.
As with ordinary method selection, there may be multiple methods with the same distance. A warning message is issued and one of the methods is chosen (the first encountered, which in this case is rather arbitrary).
Notice that, while the computation examines all arguments, the essential cost of dispatch goes up with the number of *distinct* classes among the arguments, likely to be much smaller than the number of arguments when the latter is large.
### Implementation Details
Methods dispatching on “...” were introduced in version 2.8.0 of **R**. The initial implementation of the corresponding selection and dispatch is in an R function, for flexibility while the new mechanism is being studied. In this implementation, a local version of `standardGeneric` is inserted in the generic function's environment. The local version selects a method according to the criteria above and calls that method, from the environment of the generic function. This is slightly different from the action taken by the C implementation when “...” is not involved. Aside from the extra computing time required, the method is evaluated in a true function call, as opposed to the special context constructed by the C version (which cannot be exactly replicated in R code.) However, situations in which different computational results would be obtained have not been encountered so far, and seem very unlikely.
Methods dispatching on arguments other than “...” are *cached* by storing the inherited method in the table of all methods, where it will be found on the next selection with the same combination of classes in the actual arguments (but not used for inheritance searches). Methods based on “...” are also cached, but not found quite as immediately. As noted, the selected method depends only on the set of classes that occur in the “...” arguments. Each of these classes can appear one or more times, so many combinations of actual argument classes will give rise to the same effective signature. The selection computation first computes and sorts the distinct classes encountered. This gives a label that will be cached in the table of all methods, avoiding any further search for inherited classes after the first occurrence. A call to `[showMethods](showmethods)` will expose such inherited methods.
The intention is that the “...” features will be added to the standard C code when enough experience with them has been obtained. It is possible that at the same time, combinations of “...” with other arguments in signatures may be supported.
### References
Chambers, John M. (2008) *Software for Data Analysis: Programming with R* Springer. (For the R version.)
Chambers, John M. (1998) *Programming with Data* Springer (For the original S4 version.)
### See Also
For the general discussion of methods, see [Methods\_Details](methods_details) and links from there.
### Examples
```
cc <- function(...)c(...)
setGeneric("cc")
setMethod("cc", "character", function(...)paste(...))
setClassUnion("Number", c("numeric", "complex"))
setMethod("cc", "Number", function(...) sum(...))
setClass("cdate", contains = "character", slots = c(date = "Date"))
setClass("vdate", contains = "vector", slots = c(date = "Date"))
cd1 <- new("cdate", "abcdef", date = Sys.Date())
cd2 <- new("vdate", "abcdef", date = Sys.Date())
stopifnot(identical(cc(letters, character(), cd1),
paste(letters, character(), cd1))) # the "character" method
stopifnot(identical(cc(letters, character(), cd2),
c(letters, character(), cd2)))
# the default, because "vdate" doesn't extend "character"
stopifnot(identical(cc(1:10, 1+1i), sum(1:10, 1+1i))) # the "Number" method
stopifnot(identical(cc(1:10, 1+1i, TRUE), c(1:10, 1+1i, TRUE))) # the default
stopifnot(identical(cc(), c())) # no arguments implies the default method
setGeneric("numMax", function(...)standardGeneric("numMax"))
setMethod("numMax", "numeric", function(...)max(...))
# won't work for complex data
setMethod("numMax", "Number", function(...) paste(...))
# should not be selected w/o complex args
stopifnot(identical(numMax(1:10, pi, 1+1i), paste(1:10, pi, 1+1i)))
stopifnot(identical(numMax(1:10, pi, 1), max(1:10, pi, 1)))
try(numMax(1:10, pi, TRUE)) # should be an error: no default method
## A generic version of paste(), dispatching on the "..." argument:
setGeneric("paste", signature = "...")
setMethod("paste", "Number", function(..., sep, collapse) c(...))
stopifnot(identical(paste(1:10, pi, 1), c(1:10, pi, 1)))
```
`fixPrevious` Fix Objects Saved from R Versions Previous to 1.8
----------------------------------------------------------------
### Description
Beginning with R version 1.8.0, the class of an object contains the identification of the package in which the class is defined. The function `fixPre1.8` fixes and re-assigns objects missing that information (typically because they were loaded from a file saved with a previous version of R.)
### Usage
```
fixPre1.8(names, where)
```
### Arguments
| | |
| --- | --- |
| `names` | Character vector of the names of all the objects to be fixed and re-assigned. |
| `where` | The environment from which to look for the objects, and for class definitions. Defaults to the top environment of the call to `fixPre1.8`, the global environment if the function is used interactively. |
### Details
The named object will be saved where it was found. Its class attribute will be changed to the full form required by R 1.8; otherwise, the contents of the object should be unchanged.
Objects will be fixed and re-assigned only if all the following conditions hold:
1. The named object exists.
2. It is from a defined class (not a basic datatype which has no actual class attribute).
3. The object appears to be from an earlier version of R.
4. The class is currently defined.
5. The object is consistent with the current class definition.
If any condition except the second fails, a warning message is generated.
Note that `fixPre1.8` currently fixes *only* the change in class attributes. In particular, it will not fix binary versions of packages installed with earlier versions of R if these use incompatible features. Such packages must be re-installed from source, which is the wise approach always when major version changes occur in R.
### Value
The names of all the objects that were in fact re-assigned.
`cbind2` Combine two Objects by Columns or Rows
------------------------------------------------
### Description
Combine two matrix-like **R** objects by columns (`cbind2`) or rows (`rbind2`). These are (S4) generic functions with default methods.
### Usage
```
cbind2(x, y, ...)
rbind2(x, y, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | any **R** object, typically matrix-like. |
| `y` | any **R** object, typically similar to `x`, or missing completely. |
| `...` | optional arguments for methods. |
### Details
The main use of `cbind2` (`rbind2`) is to be called recursively by `[cbind](../../base/html/cbind)()` (`rbind()`) when both of these requirements are met:
* There is at least one argument that is an S4 object, and
* S3 dispatch fails (see the Dispatch section under [cbind](../../base/html/cbind)).
The methods on `cbind2` and `rbind2` effectively define the type promotion policy when combining a heterogeneous set of arguments. The homogeneous case, where all objects derive from some S4 class, can be handled via S4 dispatch on the `...` argument via an externally defined S4 `cbind` (`rbind`) generic.
Since (for legacy reasons) S3 dispatch is attempted first, it is generally a good idea to additionally define an S3 method on `cbind` (`rbind`) for the S4 class. The S3 method will be invoked when the arguments include objects of the S4 class, along with arguments of classes for which no S3 method exists. Also, in case there is an argument that selects a different S3 method (like the one for `data.frame`), this S3 method serves to introduce an ambiguity in dispatch that triggers the recursive fallback to `cbind2` (`rbind2`). Otherwise, the other S3 method would be called, which may not be appropriate.
### Value
A matrix (or matrix-like object) combining the columns (or rows) of `x` and `y`. Note that methods must construct `[colnames](../../base/html/colnames)` and `[rownames](../../base/html/colnames)` from the corresponding column and row names of `x` and `y` (but not from deparsing argument names such as in `[cbind](../../base/html/cbind)(..., deparse.level = d)` for *d >= 1*).
### Methods
`signature(x = "ANY", y = "ANY")`
the default method using **R**'s internal code.
`signature(x = "ANY", y = "missing")`
the default method for one argument using **R**'s internal code.
### See Also
`[cbind](../../base/html/cbind)`, `[rbind](../../base/html/cbind)`; further, `[cBind](../../matrix/html/cbind)`, `[rBind](../../matrix/html/cbind)` in the [Matrix](https://CRAN.R-project.org/package=Matrix) package.
### Examples
```
cbind2(1:3, 4)
m <- matrix(3:8, 2,3, dimnames=list(c("a","b"), LETTERS[1:3]))
cbind2(1:2, m) # keeps dimnames from m
## rbind() and cbind() now make use of rbind2()/cbind2() methods
setClass("Num", contains="numeric")
setMethod("cbind2", c("Num", "missing"),
function(x,y, ...) { cat("Num-miss--meth\n"); as.matrix(x)})
setMethod("cbind2", c("Num","ANY"), function(x,y, ...) {
cat("Num-A.--method\n") ; cbind(getDataPart(x), y, ...) })
setMethod("cbind2", c("ANY","Num"), function(x,y, ...) {
cat("A.-Num--method\n") ; cbind(x, getDataPart(y), ...) })
a <- new("Num", 1:3)
trace("cbind2")
cbind(a)
cbind(a, four=4, 7:9)# calling cbind2() twice
cbind(m,a, ch=c("D","E"), a*3)
cbind(1,a, m) # ok with a warning
untrace("cbind2")
```