'Compute decision function of ``X`` for each iteration. This method allows monitoring (i.e. determining the error on a testing set) after each stage. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns score : generator of array, shape = [n_samples, k] The decision function of the input samples. The order of the classes corresponds to that in the attribute `classes_`. Regression and binary classification are special cases with ``k == 1``, otherwise ``k == n_classes``.'
def staged_decision_function(self, X):
    for dec in self._staged_decision_function(X):
        yield dec
'Predict class for X. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns y : array of shape = [n_samples] The predicted values.'
def predict(self, X):
    score = self.decision_function(X)
    decisions = self.loss_._score_to_decision(score)
    return self.classes_.take(decisions, axis=0)
'Predict class at each stage for X. This method allows monitoring (i.e. determining the error on a testing set) after each stage. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns y : generator of array of shape = [n_samples] The predicted class of the input samples.'
def staged_predict(self, X):
    for score in self._staged_decision_function(X):
        decisions = self.loss_._score_to_decision(score)
        yield self.classes_.take(decisions, axis=0)
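For context, a minimal monitoring sketch built on the staged generators above; the dataset, split, and hyperparameters are arbitrary illustrative choices, not part of the original source:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# staged_predict yields one prediction array per boosting stage, so the
# test error can be tracked without refitting the model
test_errors = [np.mean(y_pred != y_test)
               for y_pred in clf.staged_predict(X_test)]
print('best stage:', int(np.argmin(test_errors)) + 1)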
'Predict class probabilities for X. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Raises AttributeError If the ``loss`` does not support probabilities. Returns p : array of shape = [n_samples, n_classes] The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.'
def predict_proba(self, X):
    score = self.decision_function(X)
    try:
        return self.loss_._score_to_proba(score)
    except NotFittedError:
        raise
    except AttributeError:
        raise AttributeError('loss=%r does not support predict_proba'
                             % self.loss)
'Predict class log-probabilities for X. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Raises AttributeError If the ``loss`` does not support probabilities. Returns p : array of shape = [n_samples, n_classes] The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.'
def predict_log_proba(self, X):
    proba = self.predict_proba(X)
    return np.log(proba)
'Predict class probabilities at each stage for X. This method allows monitoring (i.e. determining the error on a testing set) after each stage. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns p : generator of array of shape = [n_samples, n_classes] The predicted class probabilities of the input samples.'
def staged_predict_proba(self, X):
    try:
        for score in self._staged_decision_function(X):
            yield self.loss_._score_to_proba(score)
    except NotFittedError:
        raise
    except AttributeError:
        raise AttributeError('loss=%r does not support predict_proba'
                             % self.loss)
'Predict regression target for X. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns y : array of shape = [n_samples] The predicted values.'
def predict(self, X):
    X = check_array(X, dtype=DTYPE, order='C', accept_sparse='csr')
    return self._decision_function(X).ravel()
'Predict regression target at each stage for X. This method allows monitoring (i.e. determining the error on a testing set) after each stage. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns y : generator of array of shape = [n_samples] The predicted value of the input samples.'
def staged_predict(self, X):
    for y in self._staged_decision_function(X):
        yield y.ravel()
'Apply trees in the ensemble to X, return leaf indices. .. versionadded:: 0.17 Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted to a sparse ``csr_matrix``. Returns X_leaves : array-like, shape = [n_samples, n_estimators] For each datapoint x in X and for each tree in the ensemble, return the index of the leaf x ends up in.'
def apply(self, X):
    leaves = super(GradientBoostingRegressor, self).apply(X)
    leaves = leaves.reshape(X.shape[0], self.estimators_.shape[0])
    return leaves
'Fit the estimators. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target values. sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns self : object'
def fit(self, X, y, sample_weight=None):
    if isinstance(y, np.ndarray) and len(y.shape) > 1 and y.shape[1] > 1:
        raise NotImplementedError('Multilabel and multi-output'
                                  ' classification is not supported.')

    if self.voting not in ('soft', 'hard'):
        raise ValueError("Voting must be 'soft' or 'hard'; got (voting=%r)"
                         % self.voting)

    if self.estimators is None or len(self.estimators) == 0:
        raise AttributeError('Invalid `estimators` attribute, `estimators`'
                             ' should be a list of (string, estimator)'
                             ' tuples')

    if (self.weights is not None and
            len(self.weights) != len(self.estimators)):
        raise ValueError('Number of classifiers and weights must be equal;'
                         ' got %d weights, %d estimators'
                         % (len(self.weights), len(self.estimators)))

    if sample_weight is not None:
        for name, step in self.estimators:
            if not has_fit_parameter(step, 'sample_weight'):
                raise ValueError("Underlying estimator '%s' does not"
                                 " support sample weights." % name)

    names, clfs = zip(*self.estimators)
    self._validate_names(names)

    n_isnone = np.sum([clf is None for _, clf in self.estimators])
    if n_isnone == len(self.estimators):
        raise ValueError('All estimators are None. At least one is '
                         'required to be a classifier!')

    self.le_ = LabelEncoder().fit(y)
    self.classes_ = self.le_.classes_
    self.estimators_ = []
    transformed_y = self.le_.transform(y)

    self.estimators_ = Parallel(n_jobs=self.n_jobs)(
        delayed(_parallel_fit_estimator)(clone(clf), X, transformed_y,
                                         sample_weight=sample_weight)
        for clf in clfs if clf is not None)

    return self
'Get the weights of estimators that are not `None`.'
@property
def _weights_not_none(self):
    if self.weights is None:
        return None
    return [w for est, w in zip(self.estimators, self.weights)
            if est[1] is not None]
'Predict class labels for X. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] The input samples. Returns maj : array-like, shape = [n_samples] Predicted class labels.'
def predict(self, X):
    check_is_fitted(self, 'estimators_')
    if self.voting == 'soft':
        maj = np.argmax(self.predict_proba(X), axis=1)
    else:  # 'hard' voting
        predictions = self._predict(X)
        maj = np.apply_along_axis(
            lambda x: np.argmax(np.bincount(
                x, weights=self._weights_not_none)),
            axis=1, arr=predictions)

    maj = self.le_.inverse_transform(maj)
    return maj
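A short usage sketch contrasting the two voting modes handled above; the component classifiers and weights are arbitrary choices for illustration:

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
estimators = [('lr', LogisticRegression()),
              ('dt', DecisionTreeClassifier(random_state=0)),
              ('nb', GaussianNB())]

# 'hard': weighted majority vote over predicted labels (the np.bincount path)
hard = VotingClassifier(estimators, voting='hard').fit(X, y)
# 'soft': argmax of the weighted average of predict_proba outputs
soft = VotingClassifier(estimators, voting='soft', weights=[2, 1, 1]).fit(X, y)
print(hard.predict(X[:5]), soft.predict(X[:5]))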
'Collect results from clf.predict_proba calls.'
def _collect_probas(self, X):
return np.asarray([clf.predict_proba(X) for clf in self.estimators_])
'Predict class probabilities for X in \'soft\' voting'
def _predict_proba(self, X):
    if self.voting == 'hard':
        raise AttributeError('predict_proba is not available when'
                             ' voting=%r' % self.voting)
    check_is_fitted(self, 'estimators_')
    avg = np.average(self._collect_probas(X), axis=0,
                     weights=self._weights_not_none)
    return avg
'Compute probabilities of possible outcomes for samples in X. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] The input samples. Returns avg : array-like, shape = [n_samples, n_classes] Weighted average probability for each class per sample.'
@property
def predict_proba(self):
return self._predict_proba
'Return class labels or probabilities for X for each estimator. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] The input samples. Returns If `voting=\'soft\'` and `flatten_transform=True`: array-like = (n_samples, n_classifiers * n_classes) otherwise array-like = (n_classifiers, n_samples, n_classes) Class probabilities calculated by each classifier. If `voting=\'hard\'`: array-like = [n_samples, n_classifiers] Class labels predicted by each classifier.'
def transform(self, X):
    check_is_fitted(self, 'estimators_')

    if self.voting == 'soft':
        probas = self._collect_probas(X)
        if self.flatten_transform is None:
            warnings.warn("'flatten_transform' default value will be "
                          "changed to True in 0.21. To silence this "
                          "warning you may explicitly set "
                          "flatten_transform=False.", DeprecationWarning)
            return probas
        elif not self.flatten_transform:
            return probas
        else:
            return np.hstack(probas)
    else:
        return self._predict(X)
'Setting the parameters for the voting classifier. Valid parameter keys can be listed with get_params(). Parameters params : keyword arguments Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the ``VotingClassifier``, the individual classifiers of the ``VotingClassifier`` can also be set, or replaced by setting them to None. Examples # In this example, the RandomForestClassifier is removed clf1 = LogisticRegression() clf2 = RandomForestClassifier() eclf = VotingClassifier(estimators=[(\'lr\', clf1), (\'rf\', clf2)]) eclf.set_params(rf=None)'
def set_params(self, **params):
    super(VotingClassifier, self)._set_params('estimators', **params)
    return self
'Get the parameters of the VotingClassifier. Parameters deep : bool Setting it to True gets the various classifiers and their parameters as well.'
def get_params(self, deep=True):
return super(VotingClassifier, self)._get_params('estimators', deep=deep)
'Collect results from clf.predict calls.'
def _predict(self, X):
return np.asarray([clf.predict(X) for clf in self.estimators_]).T
'Build a boosted classifier/regressor from the training set (X, y). Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. The dtype is forced to DTYPE from tree._tree if the base classifier of this ensemble weighted boosting classifier is a tree or forest. y : array-like of shape = [n_samples] The target values (class labels in classification, real numbers in regression). sample_weight : array-like of shape = [n_samples], optional Sample weights. If None, the sample weights are initialized to 1 / n_samples. Returns self : object Returns self.'
def fit(self, X, y, sample_weight=None):
    if self.learning_rate <= 0:
        raise ValueError('learning_rate must be greater than zero')

    if (self.base_estimator is None or
            isinstance(self.base_estimator,
                       (BaseDecisionTree, BaseForest))):
        dtype = DTYPE
        accept_sparse = 'csc'
    else:
        dtype = None
        accept_sparse = ['csr', 'csc']

    X, y = check_X_y(X, y, accept_sparse=accept_sparse, dtype=dtype,
                     y_numeric=is_regressor(self))

    if sample_weight is None:
        # Initialize weights to 1 / n_samples
        sample_weight = np.empty(X.shape[0], dtype=np.float64)
        sample_weight[:] = 1. / X.shape[0]
    else:
        # Normalize existing weights
        sample_weight = check_array(sample_weight, ensure_2d=False)
        sample_weight = sample_weight / sample_weight.sum(dtype=np.float64)

        if sample_weight.sum() <= 0:
            raise ValueError(
                'Attempting to fit with a non-positive '
                'weighted number of samples.')

    # Check parameters
    self._validate_estimator()

    # Clear any previous fit results
    self.estimators_ = []
    self.estimator_weights_ = np.zeros(self.n_estimators, dtype=np.float64)
    self.estimator_errors_ = np.ones(self.n_estimators, dtype=np.float64)

    random_state = check_random_state(self.random_state)

    for iboost in range(self.n_estimators):
        # Boosting step
        sample_weight, estimator_weight, estimator_error = self._boost(
            iboost, X, y, sample_weight, random_state)

        # Early termination
        if sample_weight is None:
            break

        self.estimator_weights_[iboost] = estimator_weight
        self.estimator_errors_[iboost] = estimator_error

        # Stop if error is zero
        if estimator_error == 0:
            break

        sample_weight_sum = np.sum(sample_weight)

        # Stop if the sum of sample weights has become non-positive
        if sample_weight_sum <= 0:
            break

        if iboost < self.n_estimators - 1:
            # Normalize
            sample_weight /= sample_weight_sum

    return self
'Implement a single boost. Warning: This method needs to be overridden by subclasses. Parameters iboost : int The index of the current boost iteration. X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR. y : array-like of shape = [n_samples] The target values (class labels). sample_weight : array-like of shape = [n_samples] The current sample weights. random_state : numpy.RandomState The current random number generator Returns sample_weight : array-like of shape = [n_samples] or None The reweighted sample weights. If None then boosting has terminated early. estimator_weight : float The weight for the current boost. If None then boosting has terminated early. error : float The classification error for the current boost. If None then boosting has terminated early.'
@abstractmethod
def _boost(self, iboost, X, y, sample_weight, random_state):
pass
'Return staged scores for X, y. This generator method yields the ensemble score after each iteration of boosting and therefore allows monitoring, such as to determine the score on a test set after each boost. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. y : array-like, shape = [n_samples] Labels for X. sample_weight : array-like, shape = [n_samples], optional Sample weights. Returns z : float'
def staged_score(self, X, y, sample_weight=None):
    for y_pred in self.staged_predict(X):
        if is_classifier(self):
            yield accuracy_score(y, y_pred, sample_weight=sample_weight)
        else:
            yield r2_score(y, y_pred, sample_weight=sample_weight)
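A small sketch of stage-wise scoring with this generator; the dataset and settings are illustrative:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X_train, y_train)

# staged_score yields one held-out accuracy per boosting iteration
scores = list(ada.staged_score(X_test, y_test))
print(len(scores), max(scores))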
'Return the feature importances (the higher, the more important the feature). Returns feature_importances_ : array, shape = [n_features]'
@property
def feature_importances_(self):
    if self.estimators_ is None or len(self.estimators_) == 0:
        raise ValueError('Estimator not fitted, '
                         'call `fit` before `feature_importances_`.')

    try:
        norm = self.estimator_weights_.sum()
        return (sum(weight * clf.feature_importances_
                    for weight, clf in zip(self.estimator_weights_,
                                           self.estimators_))
                / norm)
    except AttributeError:
        raise AttributeError(
            'Unable to compute feature importances '
            'since base_estimator does not have a '
            'feature_importances_ attribute')
'Ensure that X is in the proper format'
def _validate_X_predict(self, X):
    if (self.base_estimator is None or
            isinstance(self.base_estimator,
                       (BaseDecisionTree, BaseForest))):
        X = check_array(X, accept_sparse='csr', dtype=DTYPE)
    else:
        X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])

    return X
'Build a boosted classifier from the training set (X, y). Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. y : array-like of shape = [n_samples] The target values (class labels). sample_weight : array-like of shape = [n_samples], optional Sample weights. If None, the sample weights are initialized to ``1 / n_samples``. Returns self : object Returns self.'
def fit(self, X, y, sample_weight=None):
    if self.algorithm not in ('SAMME', 'SAMME.R'):
        raise ValueError('algorithm %s is not supported' % self.algorithm)
    return super(AdaBoostClassifier, self).fit(X, y, sample_weight)
'Check the estimator and set the base_estimator_ attribute.'
def _validate_estimator(self):
    super(AdaBoostClassifier, self)._validate_estimator(
        default=DecisionTreeClassifier(max_depth=1))

    if self.algorithm == 'SAMME.R':
        if not hasattr(self.base_estimator_, 'predict_proba'):
            raise TypeError(
                "AdaBoostClassifier with algorithm='SAMME.R' requires "
                "that the weak learner supports the calculation of class "
                "probabilities with a predict_proba method.\n"
                "Please change the base estimator or set "
                "algorithm='SAMME' instead.")
    if not has_fit_parameter(self.base_estimator_, 'sample_weight'):
        raise ValueError("%s doesn't support sample_weight."
                         % self.base_estimator_.__class__.__name__)
'Implement a single boost. Perform a single boost according to the real multi-class SAMME.R algorithm or to the discrete SAMME algorithm and return the updated sample weights. Parameters iboost : int The index of the current boost iteration. X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. y : array-like of shape = [n_samples] The target values (class labels). sample_weight : array-like of shape = [n_samples] The current sample weights. random_state : numpy.RandomState The current random number generator Returns sample_weight : array-like of shape = [n_samples] or None The reweighted sample weights. If None then boosting has terminated early. estimator_weight : float The weight for the current boost. If None then boosting has terminated early. estimator_error : float The classification error for the current boost. If None then boosting has terminated early.'
def _boost(self, iboost, X, y, sample_weight, random_state):
    if self.algorithm == 'SAMME.R':
        return self._boost_real(iboost, X, y, sample_weight, random_state)
    else:  # self.algorithm == 'SAMME'
        return self._boost_discrete(iboost, X, y, sample_weight,
                                    random_state)
'Implement a single boost using the SAMME.R real algorithm.'
def _boost_real(self, iboost, X, y, sample_weight, random_state):
    estimator = self._make_estimator(random_state=random_state)
    estimator.fit(X, y, sample_weight=sample_weight)

    y_predict_proba = estimator.predict_proba(X)

    if iboost == 0:
        self.classes_ = getattr(estimator, 'classes_', None)
        self.n_classes_ = len(self.classes_)

    y_predict = self.classes_.take(np.argmax(y_predict_proba, axis=1),
                                   axis=0)

    # Instances incorrectly classified
    incorrect = y_predict != y

    # Weighted error fraction
    estimator_error = np.mean(
        np.average(incorrect, weights=sample_weight, axis=0))

    # Stop if classification is perfect
    if estimator_error <= 0:
        return sample_weight, 1., 0.

    # Construct y coding: y_k = 1 if c == k else -1 / (K - 1)
    n_classes = self.n_classes_
    classes = self.classes_
    y_codes = np.array([-1. / (n_classes - 1), 1.])
    y_coding = y_codes.take(classes == y[:, np.newaxis])

    # Displace zero probabilities so the log is defined
    proba = y_predict_proba  # alias for the same array
    proba[proba < np.finfo(proba.dtype).eps] = np.finfo(proba.dtype).eps

    # Boost weight using the multi-class SAMME.R algorithm
    estimator_weight = (-1. * self.learning_rate
                        * ((n_classes - 1.) / n_classes)
                        * inner1d(y_coding, np.log(y_predict_proba)))

    # Only boost the weights if we will fit again
    if not iboost == self.n_estimators - 1:
        # Only boost positive weights
        sample_weight *= np.exp(estimator_weight *
                                ((sample_weight > 0) |
                                 (estimator_weight < 0)))

    return sample_weight, 1., estimator_error
'Implement a single boost using the SAMME discrete algorithm.'
def _boost_discrete(self, iboost, X, y, sample_weight, random_state):
    estimator = self._make_estimator(random_state=random_state)
    estimator.fit(X, y, sample_weight=sample_weight)

    y_predict = estimator.predict(X)

    if iboost == 0:
        self.classes_ = getattr(estimator, 'classes_', None)
        self.n_classes_ = len(self.classes_)

    # Instances incorrectly classified
    incorrect = y_predict != y

    # Weighted error fraction
    estimator_error = np.mean(
        np.average(incorrect, weights=sample_weight, axis=0))

    # Stop if classification is perfect
    if estimator_error <= 0:
        return sample_weight, 1., 0.

    n_classes = self.n_classes_

    # Stop if the error is at least as bad as random guessing
    if estimator_error >= 1. - (1. / n_classes):
        self.estimators_.pop(-1)
        if len(self.estimators_) == 0:
            raise ValueError('BaseClassifier in AdaBoostClassifier '
                             'ensemble is worse than random, ensemble '
                             'can not be fit.')
        return None, None, None

    # Boost weight using the multi-class SAMME algorithm
    estimator_weight = self.learning_rate * (
        np.log((1. - estimator_error) / estimator_error) +
        np.log(n_classes - 1.))

    # Only boost the weights if we will fit again
    if not iboost == self.n_estimators - 1:
        # Only boost positive weights
        sample_weight *= np.exp(estimator_weight * incorrect *
                                ((sample_weight > 0) |
                                 (estimator_weight < 0)))

    return sample_weight, estimator_weight, estimator_error
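To make the SAMME update concrete, a small numeric check of the estimator-weight formula used above; the error value and class count are arbitrary:

import numpy as np

learning_rate = 1.0
n_classes = 3
estimator_error = 0.4  # weighted error of the weak learner

# alpha = lr * (log((1 - err) / err) + log(K - 1)); positive exactly when
# the learner beats random guessing, i.e. err < 1 - 1/K
alpha = learning_rate * (np.log((1 - estimator_error) / estimator_error)
                         + np.log(n_classes - 1))
print(alpha)  # log(1.5) + log(2) ~= 1.0986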
'Predict classes for X. The predicted class of an input sample is computed as the weighted mean prediction of the classifiers in the ensemble. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns y : array of shape = [n_samples] The predicted classes.'
def predict(self, X):
    pred = self.decision_function(X)

    if self.n_classes_ == 2:
        return self.classes_.take(pred > 0, axis=0)

    return self.classes_.take(np.argmax(pred, axis=1), axis=0)
'Return staged predictions for X. The predicted class of an input sample is computed as the weighted mean prediction of the classifiers in the ensemble. This generator method yields the ensemble prediction after each iteration of boosting and therefore allows monitoring, such as to determine the prediction on a test set after each boost. Parameters X : array-like of shape = [n_samples, n_features] The input samples. Returns y : generator of array, shape = [n_samples] The predicted classes.'
def staged_predict(self, X):
    n_classes = self.n_classes_
    classes = self.classes_

    if n_classes == 2:
        for pred in self.staged_decision_function(X):
            yield np.array(classes.take(pred > 0, axis=0))
    else:
        for pred in self.staged_decision_function(X):
            yield np.array(classes.take(np.argmax(pred, axis=1), axis=0))
'Compute the decision function of ``X``. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns score : array, shape = [n_samples, k] The decision function of the input samples. The order of outputs is the same as that of the `classes_` attribute. Binary classification is a special case with ``k == 1``, otherwise ``k == n_classes``. For binary classification, values closer to -1 or 1 mean more like the first or second class in ``classes_``, respectively.'
def decision_function(self, X):
    check_is_fitted(self, 'n_classes_')
    X = self._validate_X_predict(X)

    n_classes = self.n_classes_
    classes = self.classes_[:, np.newaxis]

    if self.algorithm == 'SAMME.R':
        # The weights are all 1. for SAMME.R
        pred = sum(_samme_proba(estimator, n_classes, X)
                   for estimator in self.estimators_)
    else:  # self.algorithm == 'SAMME'
        pred = sum((estimator.predict(X) == classes).T * w
                   for estimator, w in zip(self.estimators_,
                                           self.estimator_weights_))

    pred /= self.estimator_weights_.sum()
    if n_classes == 2:
        pred[:, 0] *= -1
        return pred.sum(axis=1)
    return pred
'Compute decision function of ``X`` for each boosting iteration. This method allows monitoring (i.e. determining the error on a testing set) after each boosting iteration. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns score : generator of array, shape = [n_samples, k] The decision function of the input samples. The order of outputs is the same as that of the `classes_` attribute. Binary classification is a special case with ``k == 1``, otherwise ``k == n_classes``. For binary classification, values closer to -1 or 1 mean more like the first or second class in ``classes_``, respectively.'
def staged_decision_function(self, X):
    check_is_fitted(self, 'n_classes_')
    X = self._validate_X_predict(X)

    n_classes = self.n_classes_
    classes = self.classes_[:, np.newaxis]
    pred = None
    norm = 0.

    for weight, estimator in zip(self.estimator_weights_,
                                 self.estimators_):
        norm += weight

        if self.algorithm == 'SAMME.R':
            # The weights are all 1. for SAMME.R
            current_pred = _samme_proba(estimator, n_classes, X)
        else:  # self.algorithm == 'SAMME'
            current_pred = estimator.predict(X)
            current_pred = (current_pred == classes).T * weight

        if pred is None:
            pred = current_pred
        else:
            pred += current_pred

        if n_classes == 2:
            tmp_pred = np.copy(pred)
            tmp_pred[:, 0] *= -1
            yield (tmp_pred / norm).sum(axis=1)
        else:
            yield pred / norm
'Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the weighted mean predicted class probabilities of the classifiers in the ensemble. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns p : array of shape = [n_samples, n_classes] The class probabilities of the input samples. The order of outputs is the same as that of the `classes_` attribute.'
def predict_proba(self, X):
    check_is_fitted(self, 'n_classes_')

    n_classes = self.n_classes_
    X = self._validate_X_predict(X)

    if n_classes == 1:
        return np.ones((X.shape[0], 1))

    if self.algorithm == 'SAMME.R':
        # The weights are all 1. for SAMME.R
        proba = sum(_samme_proba(estimator, n_classes, X)
                    for estimator in self.estimators_)
    else:  # self.algorithm == 'SAMME'
        proba = sum(estimator.predict_proba(X) * w
                    for estimator, w in zip(self.estimators_,
                                            self.estimator_weights_))

    proba /= self.estimator_weights_.sum()
    proba = np.exp((1. / (n_classes - 1)) * proba)
    normalizer = proba.sum(axis=1)[:, np.newaxis]
    normalizer[normalizer == 0.0] = 1.0
    proba /= normalizer

    return proba
'Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the weighted mean predicted class probabilities of the classifiers in the ensemble. This generator method yields the ensemble predicted class probabilities after each iteration of boosting and therefore allows monitoring, such as to determine the predicted class probabilities on a test set after each boost. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns p : generator of array, shape = [n_samples, n_classes] The class probabilities of the input samples. The order of outputs is the same as that of the `classes_` attribute.'
def staged_predict_proba(self, X):
    X = self._validate_X_predict(X)

    n_classes = self.n_classes_
    proba = None
    norm = 0.

    for weight, estimator in zip(self.estimator_weights_,
                                 self.estimators_):
        norm += weight

        if self.algorithm == 'SAMME.R':
            # The weights are all 1. for SAMME.R
            current_proba = _samme_proba(estimator, n_classes, X)
        else:  # self.algorithm == 'SAMME'
            current_proba = estimator.predict_proba(X) * weight

        if proba is None:
            proba = current_proba
        else:
            proba += current_proba

        real_proba = np.exp((1. / (n_classes - 1)) * (proba / norm))
        normalizer = real_proba.sum(axis=1)[:, np.newaxis]
        normalizer[normalizer == 0.0] = 1.0
        real_proba /= normalizer

        yield real_proba
'Predict class log-probabilities for X. The predicted class log-probabilities of an input sample are computed as the weighted mean predicted class log-probabilities of the classifiers in the ensemble. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns p : array of shape = [n_samples, n_classes] The class log-probabilities of the input samples. The order of outputs is the same as that of the `classes_` attribute.'
def predict_log_proba(self, X):
return np.log(self.predict_proba(X))
'Build a boosted regressor from the training set (X, y). Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. y : array-like of shape = [n_samples] The target values (real numbers). sample_weight : array-like of shape = [n_samples], optional Sample weights. If None, the sample weights are initialized to 1 / n_samples. Returns self : object Returns self.'
def fit(self, X, y, sample_weight=None):
    if self.loss not in ('linear', 'square', 'exponential'):
        raise ValueError(
            "loss must be 'linear', 'square', or 'exponential'")
    return super(AdaBoostRegressor, self).fit(X, y, sample_weight)
'Check the estimator and set the base_estimator_ attribute.'
def _validate_estimator(self):
super(AdaBoostRegressor, self)._validate_estimator(default=DecisionTreeRegressor(max_depth=3))
'Implement a single boost for regression Perform a single boost according to the AdaBoost.R2 algorithm and return the updated sample weights. Parameters iboost : int The index of the current boost iteration. X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. y : array-like of shape = [n_samples] The target values (class labels in classification, real numbers in regression). sample_weight : array-like of shape = [n_samples] The current sample weights. random_state : numpy.RandomState The current random number generator Returns sample_weight : array-like of shape = [n_samples] or None The reweighted sample weights. If None then boosting has terminated early. estimator_weight : float The weight for the current boost. If None then boosting has terminated early. estimator_error : float The regression error for the current boost. If None then boosting has terminated early.'
def _boost(self, iboost, X, y, sample_weight, random_state):
    estimator = self._make_estimator(random_state=random_state)

    # Weighted sampling of the training set with replacement:
    # invert the CDF of the sample weights with uniform draws
    cdf = stable_cumsum(sample_weight)
    cdf /= cdf[-1]
    uniform_samples = random_state.random_sample(X.shape[0])
    bootstrap_idx = cdf.searchsorted(uniform_samples, side='right')
    # searchsorted returns a scalar
    bootstrap_idx = np.array(bootstrap_idx, copy=False)

    # Fit on the bootstrapped sample and obtain a prediction
    # for all samples in the training set
    estimator.fit(X[bootstrap_idx], y[bootstrap_idx])
    y_predict = estimator.predict(X)

    error_vect = np.abs(y_predict - y)
    error_max = error_vect.max()

    if error_max != 0.:
        error_vect /= error_max

    if self.loss == 'square':
        error_vect **= 2
    elif self.loss == 'exponential':
        error_vect = 1. - np.exp(-error_vect)

    # Calculate the average loss
    estimator_error = (sample_weight * error_vect).sum()

    if estimator_error <= 0:
        # Stop if fit is perfect
        return sample_weight, 1., 0.
    elif estimator_error >= 0.5:
        # Discard the current estimator only if it isn't the only one
        if len(self.estimators_) > 1:
            self.estimators_.pop(-1)
        return None, None, None

    beta = estimator_error / (1. - estimator_error)

    # Boost weight using the AdaBoost.R2 algorithm
    estimator_weight = self.learning_rate * np.log(1. / beta)

    if not iboost == self.n_estimators - 1:
        sample_weight *= np.power(
            beta, (1. - error_vect) * self.learning_rate)

    return sample_weight, estimator_weight, estimator_error
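The weighted bootstrap above draws indices by inverting the cumulative distribution of the sample weights; a standalone sketch of the same idea with made-up weights:

import numpy as np

rng = np.random.RandomState(0)
sample_weight = np.array([0.1, 0.1, 0.6, 0.2])  # already normalized

cdf = np.cumsum(sample_weight)
cdf /= cdf[-1]
uniform_samples = rng.random_sample(10)
# searchsorted maps each uniform draw to the index of the CDF bin it falls
# in, so index 2 (weight 0.6) is drawn most often
bootstrap_idx = cdf.searchsorted(uniform_samples, side='right')
print(bootstrap_idx)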
'Predict regression value for X. The predicted regression value of an input sample is computed as the weighted median prediction of the regressors in the ensemble. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns y : array of shape = [n_samples] The predicted regression values.'
def predict(self, X):
    check_is_fitted(self, 'estimator_weights_')
    X = self._validate_X_predict(X)

    return self._get_median_predict(X, len(self.estimators_))
'Return staged predictions for X. The predicted regression value of an input sample is computed as the weighted median prediction of the regressors in the ensemble. This generator method yields the ensemble prediction after each iteration of boosting and therefore allows monitoring, such as to determine the prediction on a test set after each boost. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. DOK and LIL are converted to CSR. Returns y : generator of array, shape = [n_samples] The predicted regression values.'
def staged_predict(self, X):
    check_is_fitted(self, 'estimator_weights_')
    X = self._validate_X_predict(X)

    for i, _ in enumerate(self.estimators_, 1):
        yield self._get_median_predict(X, limit=i)
'Apply trees in the forest to X, return leaf indices. Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``. Returns X_leaves : array_like, shape = [n_samples, n_estimators] For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.'
def apply(self, X):
    X = self._validate_X_predict(X)
    results = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,
                       backend='threading')(
        delayed(parallel_helper)(tree, 'apply', X, check_input=False)
        for tree in self.estimators_)

    return np.array(results).T
'Return the decision path in the forest. .. versionadded:: 0.18 Parameters X : array-like or sparse matrix, shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``. Returns indicator : sparse csr array, shape = [n_samples, n_nodes] Return a node indicator matrix where nonzero elements indicate that the sample goes through the nodes. n_nodes_ptr : array of size (n_estimators + 1, ) The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator value for the i-th estimator.'
def decision_path(self, X):
    X = self._validate_X_predict(X)
    indicators = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,
                          backend='threading')(
        delayed(parallel_helper)(tree, 'decision_path', X,
                                 check_input=False)
        for tree in self.estimators_)

    n_nodes = [0]
    n_nodes.extend([i.shape[1] for i in indicators])
    n_nodes_ptr = np.array(n_nodes).cumsum()

    return sparse_hstack(indicators).tocsr(), n_nodes_ptr
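A usage sketch for slicing the stacked indicator matrix back into per-tree blocks via n_nodes_ptr; the forest and data are illustrative:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=50, random_state=0)
forest = RandomForestClassifier(n_estimators=3, random_state=0).fit(X, y)

indicator, n_nodes_ptr = forest.decision_path(X)
# columns n_nodes_ptr[i]:n_nodes_ptr[i + 1] belong to the i-th tree
first_tree_block = indicator[:, n_nodes_ptr[0]:n_nodes_ptr[1]]
print(indicator.shape, first_tree_block.shape)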
'Build a forest of trees from the training set (X, y). Parameters X : array-like or sparse matrix of shape = [n_samples, n_features] The training input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csc_matrix``. y : array-like, shape = [n_samples] or [n_samples, n_outputs] The target values (class labels in classification, real numbers in regression). sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. Returns self : object Returns self.'
def fit(self, X, y, sample_weight=None):
    # Validate or convert input data
    X = check_array(X, accept_sparse='csc', dtype=DTYPE)
    y = check_array(y, accept_sparse='csc', ensure_2d=False, dtype=None)
    if sample_weight is not None:
        sample_weight = check_array(sample_weight, ensure_2d=False)
    if issparse(X):
        # Pre-sort indices to avoid each individual tree doing it
        X.sort_indices()

    # Remap output
    n_samples, self.n_features_ = X.shape

    y = np.atleast_1d(y)
    if y.ndim == 2 and y.shape[1] == 1:
        warn('A column-vector y was passed when a 1d array was expected. '
             'Please change the shape of y to (n_samples,), for example '
             'using ravel().', DataConversionWarning, stacklevel=2)

    if y.ndim == 1:
        # reshape is necessary to preserve the data contiguity
        y = np.reshape(y, (-1, 1))

    self.n_outputs_ = y.shape[1]

    y, expanded_class_weight = self._validate_y_class_weight(y)

    if getattr(y, 'dtype', None) != DOUBLE or not y.flags.contiguous:
        y = np.ascontiguousarray(y, dtype=DOUBLE)

    if expanded_class_weight is not None:
        if sample_weight is not None:
            sample_weight = sample_weight * expanded_class_weight
        else:
            sample_weight = expanded_class_weight

    # Check parameters
    self._validate_estimator()

    if not self.bootstrap and self.oob_score:
        raise ValueError('Out of bag estimation only available'
                         ' if bootstrap=True')

    random_state = check_random_state(self.random_state)

    if not self.warm_start or not hasattr(self, 'estimators_'):
        # Free allocated memory, if any
        self.estimators_ = []

    n_more_estimators = self.n_estimators - len(self.estimators_)

    if n_more_estimators < 0:
        raise ValueError('n_estimators=%d must be larger or equal to '
                         'len(estimators_)=%d when warm_start==True'
                         % (self.n_estimators, len(self.estimators_)))
    elif n_more_estimators == 0:
        warn('Warm-start fitting without increasing n_estimators does '
             'not fit new trees.')
    else:
        if self.warm_start and len(self.estimators_) > 0:
            # Advance the random state as if the existing trees had
            # just been drawn, so warm starts stay reproducible
            random_state.randint(MAX_INT, size=len(self.estimators_))

        trees = []
        for i in range(n_more_estimators):
            tree = self._make_estimator(append=False,
                                        random_state=random_state)
            trees.append(tree)

        # Parallel loop with the threading backend: the tree-building
        # code releases the GIL, so threads avoid pickling X and y
        trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,
                         backend='threading')(
            delayed(_parallel_build_trees)(
                t, self, X, y, sample_weight, i, len(trees),
                verbose=self.verbose, class_weight=self.class_weight)
            for i, t in enumerate(trees))

        # Collect newly grown trees
        self.estimators_.extend(trees)

    if self.oob_score:
        self._set_oob_score(X, y)

    # Decapsulate classes_ attributes for single-output problems
    if hasattr(self, 'classes_') and self.n_outputs_ == 1:
        self.n_classes_ = self.n_classes_[0]
        self.classes_ = self.classes_[0]

    return self
'Validate X whenever one tries to predict, apply, predict_proba'
def _validate_X_predict(self, X):
    if self.estimators_ is None or len(self.estimators_) == 0:
        raise NotFittedError('Estimator not fitted, '
                             'call `fit` before exploiting the model.')

    return self.estimators_[0]._validate_X_predict(X, check_input=True)
'Return the feature importances (the higher, the more important the feature). Returns feature_importances_ : array, shape = [n_features]'
@property
def feature_importances_(self):
    check_is_fitted(self, 'estimators_')

    all_importances = Parallel(n_jobs=self.n_jobs, backend='threading')(
        delayed(getattr)(tree, 'feature_importances_')
        for tree in self.estimators_)

    return sum(all_importances) / len(self.estimators_)
'Compute out-of-bag score'
def _set_oob_score(self, X, y):
    X = check_array(X, dtype=DTYPE, accept_sparse='csr')

    n_classes_ = self.n_classes_
    n_samples = y.shape[0]

    oob_decision_function = []
    oob_score = 0.0
    predictions = []

    for k in range(self.n_outputs_):
        predictions.append(np.zeros((n_samples, n_classes_[k])))

    for estimator in self.estimators_:
        unsampled_indices = _generate_unsampled_indices(
            estimator.random_state, n_samples)
        p_estimator = estimator.predict_proba(X[unsampled_indices, :],
                                              check_input=False)

        if self.n_outputs_ == 1:
            p_estimator = [p_estimator]

        for k in range(self.n_outputs_):
            predictions[k][unsampled_indices, :] += p_estimator[k]

    for k in range(self.n_outputs_):
        if (predictions[k].sum(axis=1) == 0).any():
            warn('Some inputs do not have OOB scores. '
                 'This probably means too few trees were used '
                 'to compute any reliable oob estimates.')

        decision = (predictions[k] /
                    predictions[k].sum(axis=1)[:, np.newaxis])
        oob_decision_function.append(decision)
        oob_score += np.mean(y[:, k] ==
                             np.argmax(predictions[k], axis=1), axis=0)

    if self.n_outputs_ == 1:
        self.oob_decision_function_ = oob_decision_function[0]
    else:
        self.oob_decision_function_ = oob_decision_function

    self.oob_score_ = oob_score / self.n_outputs_
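A minimal sketch of reading the OOB attributes this method populates; the dataset is synthetic and purely illustrative:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, random_state=0)
clf = RandomForestClassifier(n_estimators=100, bootstrap=True,
                             oob_score=True, random_state=0).fit(X, y)

# oob_score_ is the accuracy of each sample judged only by trees that did
# not see it; oob_decision_function_ holds the per-class OOB probabilities
print(clf.oob_score_, clf.oob_decision_function_.shape)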
'Predict class for X. The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with highest mean probability estimate across the trees. Parameters X : array-like or sparse matrix of shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``. Returns y : array of shape = [n_samples] or [n_samples, n_outputs] The predicted classes.'
def predict(self, X):
    proba = self.predict_proba(X)

    if self.n_outputs_ == 1:
        return self.classes_.take(np.argmax(proba, axis=1), axis=0)
    else:
        n_samples = proba[0].shape[0]
        predictions = np.zeros((n_samples, self.n_outputs_))

        for k in range(self.n_outputs_):
            predictions[:, k] = self.classes_[k].take(
                np.argmax(proba[k], axis=1), axis=0)

        return predictions
'Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf. Parameters X : array-like or sparse matrix of shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``. Returns p : array of shape = [n_samples, n_classes], or a list of n_outputs such arrays if n_outputs > 1. The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.'
def predict_proba(self, X):
    check_is_fitted(self, 'estimators_')
    # Check data
    X = self._validate_X_predict(X)

    # Assign chunk of trees to jobs
    n_jobs, _, _ = _partition_estimators(self.n_estimators, self.n_jobs)

    # Accumulate the predictions of all trees in place
    all_proba = [np.zeros((X.shape[0], j), dtype=np.float64)
                 for j in np.atleast_1d(self.n_classes_)]
    Parallel(n_jobs=n_jobs, verbose=self.verbose, backend='threading')(
        delayed(accumulate_prediction)(e.predict_proba, X, all_proba)
        for e in self.estimators_)

    for proba in all_proba:
        proba /= len(self.estimators_)

    if len(all_proba) == 1:
        return all_proba[0]
    else:
        return all_proba
'Predict class log-probabilities for X. The predicted class log-probabilities of an input sample is computed as the log of the mean predicted class probabilities of the trees in the forest. Parameters X : array-like or sparse matrix of shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``. Returns p : array of shape = [n_samples, n_classes], or a list of n_outputs such arrays if n_outputs > 1. The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.'
def predict_log_proba(self, X):
    proba = self.predict_proba(X)

    if self.n_outputs_ == 1:
        return np.log(proba)
    else:
        for k in range(self.n_outputs_):
            proba[k] = np.log(proba[k])
        return proba
'Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the trees in the forest. Parameters X : array-like or sparse matrix of shape = [n_samples, n_features] The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``. Returns y : array of shape = [n_samples] or [n_samples, n_outputs] The predicted values.'
def predict(self, X):
    check_is_fitted(self, 'estimators_')
    # Check data
    X = self._validate_X_predict(X)

    # Assign chunk of trees to jobs
    n_jobs, _, _ = _partition_estimators(self.n_estimators, self.n_jobs)

    # Accumulate the mean prediction of all trees in place
    if self.n_outputs_ > 1:
        y_hat = np.zeros((X.shape[0], self.n_outputs_), dtype=np.float64)
    else:
        y_hat = np.zeros(X.shape[0], dtype=np.float64)

    Parallel(n_jobs=n_jobs, verbose=self.verbose, backend='threading')(
        delayed(accumulate_prediction)(e.predict, X, [y_hat])
        for e in self.estimators_)

    y_hat /= len(self.estimators_)

    return y_hat
'Compute out-of-bag scores'
def _set_oob_score(self, X, y):
    X = check_array(X, dtype=DTYPE, accept_sparse='csr')

    n_samples = y.shape[0]

    predictions = np.zeros((n_samples, self.n_outputs_))
    n_predictions = np.zeros((n_samples, self.n_outputs_))

    for estimator in self.estimators_:
        unsampled_indices = _generate_unsampled_indices(
            estimator.random_state, n_samples)
        p_estimator = estimator.predict(
            X[unsampled_indices, :], check_input=False)

        if self.n_outputs_ == 1:
            p_estimator = p_estimator[:, np.newaxis]

        predictions[unsampled_indices, :] += p_estimator
        n_predictions[unsampled_indices, :] += 1

    if (n_predictions == 0).any():
        warn('Some inputs do not have OOB scores. '
             'This probably means too few trees were used '
             'to compute any reliable oob estimates.')
        n_predictions[n_predictions == 0] = 1

    predictions /= n_predictions
    self.oob_prediction_ = predictions

    if self.n_outputs_ == 1:
        self.oob_prediction_ = self.oob_prediction_.reshape((n_samples,))

    self.oob_score_ = 0.0

    for k in range(self.n_outputs_):
        self.oob_score_ += r2_score(y[:, k], predictions[:, k])

    self.oob_score_ /= self.n_outputs_
'Fit estimator. Parameters X : array-like or sparse matrix, shape=(n_samples, n_features) The input samples. Use ``dtype=np.float32`` for maximum efficiency. Sparse matrices are also supported, use sparse ``csc_matrix`` for maximum efficiency. sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. Returns self : object Returns self.'
def fit(self, X, y=None, sample_weight=None):
    self.fit_transform(X, y, sample_weight=sample_weight)
    return self
'Fit estimator and transform dataset. Parameters X : array-like or sparse matrix, shape=(n_samples, n_features) Input data used to build forests. Use ``dtype=np.float32`` for maximum efficiency. sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. Returns X_transformed : sparse matrix, shape=(n_samples, n_out) Transformed dataset.'
def fit_transform(self, X, y=None, sample_weight=None):
    X = check_array(X, accept_sparse=['csc'])
    if issparse(X):
        # Pre-sort indices to avoid each individual tree doing it
        X.sort_indices()

    # The trees are fully random, so y is only a synthetic uniform
    # target of the right length
    rnd = check_random_state(self.random_state)
    y = rnd.uniform(size=X.shape[0])
    super(RandomTreesEmbedding, self).fit(X, y,
                                          sample_weight=sample_weight)

    self.one_hot_encoder_ = OneHotEncoder(sparse=self.sparse_output)
    return self.one_hot_encoder_.fit_transform(self.apply(X))
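A short usage sketch of the embedding; the data and sizes are arbitrary illustrative choices:

from sklearn.datasets import make_circles
from sklearn.ensemble import RandomTreesEmbedding

X, _ = make_circles(factor=0.5, noise=0.05, random_state=0)
embedder = RandomTreesEmbedding(n_estimators=10, max_depth=3,
                                random_state=0)
X_transformed = embedder.fit_transform(X)
# one indicator column per leaf across all trees: a sparse, binary,
# high-dimensional embedding
print(X_transformed.shape, X_transformed.nnz)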
'Transform dataset. Parameters X : array-like or sparse matrix, shape=(n_samples, n_features) Input data to be transformed. Use ``dtype=np.float32`` for maximum efficiency. Sparse matrices are also supported, use sparse ``csr_matrix`` for maximum efficiency. Returns X_transformed : sparse matrix, shape=(n_samples, n_out) Transformed dataset.'
def transform(self, X):
return self.one_hot_encoder_.transform(self.apply(X))
'Fit estimator. Parameters X : array-like or sparse matrix, shape (n_samples, n_features) The input samples. Use ``dtype=np.float32`` for maximum efficiency. Sparse matrices are also supported, use sparse ``csc_matrix`` for maximum efficiency. sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Returns self : object Returns self.'
def fit(self, X, y=None, sample_weight=None):
    X = check_array(X, accept_sparse=['csc'])
    if issparse(X):
        # Pre-sort indices to avoid each individual tree doing it
        X.sort_indices()

    rnd = check_random_state(self.random_state)
    y = rnd.uniform(size=X.shape[0])

    # Ensure that max_samples is in [1, n_samples]
    n_samples = X.shape[0]

    if isinstance(self.max_samples, six.string_types):
        if self.max_samples == 'auto':
            max_samples = min(256, n_samples)
        else:
            raise ValueError('max_samples (%s) is not supported. '
                             'Valid choices are: "auto", int or float'
                             % self.max_samples)
    elif isinstance(self.max_samples, INTEGER_TYPES):
        if self.max_samples > n_samples:
            warn('max_samples (%s) is greater than the '
                 'total number of samples (%s). max_samples '
                 'will be set to n_samples for estimation.'
                 % (self.max_samples, n_samples))
            max_samples = n_samples
        else:
            max_samples = self.max_samples
    else:  # float
        if not (0. < self.max_samples <= 1.):
            raise ValueError('max_samples must be in (0, 1], got %r'
                             % self.max_samples)
        max_samples = int(self.max_samples * X.shape[0])

    self.max_samples_ = max_samples
    max_depth = int(np.ceil(np.log2(max(max_samples, 2))))
    super(IsolationForest, self)._fit(X, y, max_samples,
                                      max_depth=max_depth,
                                      sample_weight=sample_weight)

    self.threshold_ = -sp.stats.scoreatpercentile(
        -self.decision_function(X),
        100. * (1. - self.contamination))

    return self
'Predict if a particular sample is an outlier or not. Parameters X : array-like or sparse matrix, shape (n_samples, n_features) The input samples. Internally, it will be converted to ``dtype=np.float32`` and if a sparse matrix is provided to a sparse ``csr_matrix``. Returns is_inlier : array, shape (n_samples,) For each observation, tells whether or not (+1 or -1) it should be considered an inlier according to the fitted model.'
def predict(self, X):
    X = check_array(X, accept_sparse='csr')
    is_inlier = np.ones(X.shape[0], dtype=int)
    is_inlier[self.decision_function(X) <= self.threshold_] = -1
    return is_inlier
'Average anomaly score of X of the base classifiers. The anomaly score of an input sample is computed as the mean anomaly score of the trees in the forest. The measure of normality of an observation given a tree is the depth of the leaf containing this observation, which is equivalent to the number of splittings required to isolate this point. When a leaf contains n_left observations, the average path length of an isolation tree grown on n_left samples is added. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns scores : array of shape (n_samples,) The anomaly score of the input samples. The lower, the more abnormal.'
def decision_function(self, X):
    X = check_array(X, accept_sparse='csr')
    n_samples = X.shape[0]

    n_samples_leaf = np.zeros((n_samples, self.n_estimators), order='f')
    depths = np.zeros((n_samples, self.n_estimators), order='f')

    if self._max_features == X.shape[1]:
        subsample_features = False
    else:
        subsample_features = True

    for i, (tree, features) in enumerate(zip(self.estimators_,
                                             self.estimators_features_)):
        if subsample_features:
            X_subset = X[:, features]
        else:
            X_subset = X
        leaves_index = tree.apply(X_subset)
        node_indicator = tree.decision_path(X_subset)
        n_samples_leaf[:, i] = tree.tree_.n_node_samples[leaves_index]
        depths[:, i] = np.ravel(node_indicator.sum(axis=1))
        depths[:, i] -= 1

    depths += _average_path_length(n_samples_leaf)

    scores = 2 ** (-depths.mean(axis=1) /
                   _average_path_length(self.max_samples_))

    # Take the opposite of the scores as bigger is better
    # (here less abnormal)
    return 0.5 - scores
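An end-to-end sketch of scoring with this decision function; the training cloud and the injected outlier are synthetic:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X_train = rng.randn(200, 2)
X_test = np.r_[rng.randn(5, 2), np.array([[6.0, 6.0]])]  # one far outlier

iso = IsolationForest(random_state=0).fit(X_train)
# lower scores mean more abnormal; in the code above, predict thresholds
# this same score at threshold_ and returns +1 (inlier) / -1 (outlier)
print(iso.decision_function(X_test))
print(iso.predict(X_test))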
'Build a Bagging ensemble of estimators from the training set (X, y). Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. y : array-like, shape = [n_samples] The target values (class labels in classification, real numbers in regression). sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Note that this is supported only if the base estimator supports sample weighting. Returns self : object Returns self.'
def fit(self, X, y, sample_weight=None):
return self._fit(X, y, self.max_samples, sample_weight=sample_weight)
'Build a Bagging ensemble of estimators from the training set (X, y). Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. y : array-like, shape = [n_samples] The target values (class labels in classification, real numbers in regression). max_samples : int or float, optional (default=None) Argument to use instead of self.max_samples. max_depth : int, optional (default=None) Override value used when constructing base estimator. Only supported if the base estimator has a max_depth parameter. sample_weight : array-like, shape = [n_samples] or None Sample weights. If None, then samples are equally weighted. Note that this is supported only if the base estimator supports sample weighting. Returns self : object Returns self.'
def _fit(self, X, y, max_samples=None, max_depth=None, sample_weight=None):
    random_state = check_random_state(self.random_state)

    # Convert data
    X, y = check_X_y(X, y, ['csr', 'csc'])
    if sample_weight is not None:
        sample_weight = check_array(sample_weight, ensure_2d=False)
        check_consistent_length(y, sample_weight)

    # Remap output
    n_samples, self.n_features_ = X.shape
    self._n_samples = n_samples
    y = self._validate_y(y)

    # Check parameters
    self._validate_estimator()

    if max_depth is not None:
        self.base_estimator_.max_depth = max_depth

    # Validate max_samples
    if max_samples is None:
        max_samples = self.max_samples
    elif not isinstance(max_samples, (numbers.Integral, np.integer)):
        max_samples = int(max_samples * X.shape[0])

    if not (0 < max_samples <= X.shape[0]):
        raise ValueError('max_samples must be in (0, n_samples]')

    # Store validated integer row sampling value
    self._max_samples = max_samples

    # Validate max_features
    if isinstance(self.max_features, (numbers.Integral, np.integer)):
        max_features = self.max_features
    else:  # float
        max_features = int(self.max_features * self.n_features_)

    if not (0 < max_features <= self.n_features_):
        raise ValueError('max_features must be in (0, n_features]')

    # Store validated integer feature sampling value
    self._max_features = max_features

    # Other checks
    if not self.bootstrap and self.oob_score:
        raise ValueError('Out of bag estimation only available'
                         ' if bootstrap=True')

    if self.warm_start and self.oob_score:
        raise ValueError('Out of bag estimate only available'
                         ' if warm_start=False')

    if hasattr(self, 'oob_score_') and self.warm_start:
        del self.oob_score_

    if not self.warm_start or not hasattr(self, 'estimators_'):
        # Free allocated memory, if any
        self.estimators_ = []
        self.estimators_features_ = []

    n_more_estimators = self.n_estimators - len(self.estimators_)

    if n_more_estimators < 0:
        raise ValueError('n_estimators=%d must be larger or equal to '
                         'len(estimators_)=%d when warm_start==True'
                         % (self.n_estimators, len(self.estimators_)))
    elif n_more_estimators == 0:
        warn('Warm-start fitting without increasing n_estimators does '
             'not fit new trees.')
        return self

    # Parallel loop
    n_jobs, n_estimators, starts = _partition_estimators(
        n_more_estimators, self.n_jobs)
    total_n_estimators = sum(n_estimators)

    # Advance random state to state after training the first
    # len(estimators_) estimators
    if self.warm_start and len(self.estimators_) > 0:
        random_state.randint(MAX_INT, size=len(self.estimators_))

    seeds = random_state.randint(MAX_INT, size=n_more_estimators)
    self._seeds = seeds

    all_results = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
        delayed(_parallel_build_estimators)(
            n_estimators[i], self, X, y, sample_weight,
            seeds[starts[i]:starts[i + 1]], total_n_estimators,
            verbose=self.verbose)
        for i in range(n_jobs))

    # Reduce
    self.estimators_ += list(itertools.chain.from_iterable(
        t[0] for t in all_results))
    self.estimators_features_ += list(itertools.chain.from_iterable(
        t[1] for t in all_results))

    if self.oob_score:
        self._set_oob_score(X, y)

    return self
'The subset of drawn samples for each base estimator. Returns a dynamically generated list of boolean masks identifying the samples used for fitting each member of the ensemble, i.e., the in-bag samples. Note: the list is re-created at each call to the property in order to reduce the object memory footprint by not storing the sampling data. Thus fetching the property may be slower than expected.'
@property
def estimators_samples_(self):
    sample_masks = []
    for _, sample_indices in self._get_estimators_indices():
        mask = indices_to_mask(sample_indices, self._n_samples)
        sample_masks.append(mask)

    return sample_masks
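A small sketch of inverting the in-bag masks returned by this property to find each estimator's out-of-bag samples; this assumes the mask-returning variant shown above (later releases may return index arrays instead):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=100, random_state=0)
bag = BaggingClassifier(n_estimators=5, random_state=0).fit(X, y)

for i, in_bag in enumerate(bag.estimators_samples_):
    # samples never drawn for this estimator
    oob = ~np.asarray(in_bag, dtype=bool)
    print('estimator %d: %d OOB samples' % (i, oob.sum()))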
'Check the estimator and set the base_estimator_ attribute.'
def _validate_estimator(self):
super(BaggingClassifier, self)._validate_estimator(default=DecisionTreeClassifier())
'Predict class for X. The predicted class of an input sample is computed as the class with the highest mean predicted probability. If base estimators do not implement a ``predict_proba`` method, then it resorts to voting. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns y : array of shape = [n_samples] The predicted classes.'
def predict(self, X):
    predicted_probability = self.predict_proba(X)
    return self.classes_.take(np.argmax(predicted_probability, axis=1),
                              axis=0)
'Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the base estimators in the ensemble. If base estimators do not implement a ``predict_proba`` method, then it resorts to voting and the predicted class probabilities of an input sample represent the proportion of estimators predicting each class. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns p : array of shape = [n_samples, n_classes] The class probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.'
def predict_proba(self, X):
check_is_fitted(self, 'classes_')
X = check_array(X, accept_sparse=['csr', 'csc'])
if self.n_features_ != X.shape[1]:
    raise ValueError('Number of features of the model must match '
                     'the input. Model n_features is {0} and input '
                     'n_features is {1}.'.format(self.n_features_,
                                                 X.shape[1]))
n_jobs, n_estimators, starts = _partition_estimators(self.n_estimators,
                                                     self.n_jobs)
all_proba = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
    delayed(_parallel_predict_proba)(
        self.estimators_[starts[i]:starts[i + 1]],
        self.estimators_features_[starts[i]:starts[i + 1]],
        X, self.n_classes_)
    for i in range(n_jobs))
proba = sum(all_proba) / self.n_estimators
return proba
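Usage sketch of the probability averaging above (the dataset and sizes are illustrative):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = BaggingClassifier(n_estimators=10, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:5])
print(proba.shape)    # (5, 2): one column per class; rows sum to 1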
'Predict class log-probabilities for X. The predicted class log-probabilities of an input sample are computed as the log of the mean predicted class probabilities of the base estimators in the ensemble. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns p : array of shape = [n_samples, n_classes] The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute `classes_`.'
def predict_log_proba(self, X):
check_is_fitted(self, 'classes_')
if hasattr(self.base_estimator_, 'predict_log_proba'):
    X = check_array(X, accept_sparse=['csr', 'csc'])
    if self.n_features_ != X.shape[1]:
        raise ValueError('Number of features of the model must match '
                         'the input. Model n_features is {0} and input '
                         'n_features is {1} '.format(self.n_features_,
                                                     X.shape[1]))
    n_jobs, n_estimators, starts = _partition_estimators(self.n_estimators,
                                                         self.n_jobs)
    all_log_proba = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
        delayed(_parallel_predict_log_proba)(
            self.estimators_[starts[i]:starts[i + 1]],
            self.estimators_features_[starts[i]:starts[i + 1]],
            X, self.n_classes_)
        for i in range(n_jobs))
    log_proba = all_log_proba[0]
    for j in range(1, len(all_log_proba)):
        log_proba = np.logaddexp(log_proba, all_log_proba[j])
    log_proba -= np.log(self.n_estimators)
    return log_proba
else:
    return np.log(self.predict_proba(X))
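The logaddexp fold above computes the log of the mean probability in a numerically stable way; a small NumPy check of that identity (toy numbers, not tied to any estimator):

import numpy as np

log_p = np.log([[0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])   # 3 estimators, 2 classes
acc = log_p[0]
for row in log_p[1:]:
    acc = np.logaddexp(acc, row)
acc -= np.log(len(log_p))
print(np.allclose(acc, np.log(np.exp(log_p).mean(axis=0))))   # True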
'Average of the decision functions of the base classifiers. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns score : array, shape = [n_samples, k] The decision function of the input samples. The columns correspond to the classes in sorted order, as they appear in the attribute ``classes_``. Regression and binary classification are special cases with ``k == 1``, otherwise ``k==n_classes``.'
@if_delegate_has_method(delegate='base_estimator')
def decision_function(self, X):
check_is_fitted(self, 'classes_')
X = check_array(X, accept_sparse=['csr', 'csc'])
if self.n_features_ != X.shape[1]:
    raise ValueError('Number of features of the model must match '
                     'the input. Model n_features is {0} and input '
                     'n_features is {1} '.format(self.n_features_,
                                                 X.shape[1]))
n_jobs, n_estimators, starts = _partition_estimators(self.n_estimators,
                                                     self.n_jobs)
all_decisions = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
    delayed(_parallel_decision_function)(
        self.estimators_[starts[i]:starts[i + 1]],
        self.estimators_features_[starts[i]:starts[i + 1]],
        X)
    for i in range(n_jobs))
decisions = sum(all_decisions) / self.n_estimators
return decisions
'Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble. Parameters X : {array-like, sparse matrix} of shape = [n_samples, n_features] The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns y : array of shape = [n_samples] The predicted values.'
def predict(self, X):
check_is_fitted(self, 'estimators_features_')
X = check_array(X, accept_sparse=['csr', 'csc'])
n_jobs, n_estimators, starts = _partition_estimators(self.n_estimators,
                                                     self.n_jobs)
all_y_hat = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
    delayed(_parallel_predict_regression)(
        self.estimators_[starts[i]:starts[i + 1]],
        self.estimators_features_[starts[i]:starts[i + 1]],
        X)
    for i in range(n_jobs))
y_hat = sum(all_y_hat) / self.n_estimators
return y_hat
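Usage sketch for the regression counterpart (illustrative toy data):

from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor

X, y = make_regression(n_samples=200, noise=5.0, random_state=0)
reg = BaggingRegressor(n_estimators=20, random_state=0).fit(X, y)
print(reg.predict(X[:3]))     # mean of the 20 per-estimator predictions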
'Check the estimator and set the base_estimator_ attribute.'
def _validate_estimator(self):
super(BaggingRegressor, self)._validate_estimator(default=DecisionTreeRegressor())
'Get parameters for this estimator. Parameters deep : boolean, optional If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns params : mapping of string to any Parameter names mapped to their values.'
def get_params(self, deep=True):
return self._get_params('steps', deep=deep)
'Set the parameters of this estimator. Valid parameter keys can be listed with ``get_params()``. Returns self'
def set_params(self, **kwargs):
self._set_params('steps', **kwargs)
return self
'Fit the model. Fit all the transforms one after the other and transform the data, then fit the transformed data using the final estimator. Parameters X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_params : dict of string -> object Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. Returns self : Pipeline This estimator'
def fit(self, X, y=None, **fit_params):
Xt, fit_params = self._fit(X, y, **fit_params)
if self._final_estimator is not None:
    self._final_estimator.fit(Xt, y, **fit_params)
return self
'Fit the model and transform with the final estimator. Fits all the transforms one after the other and transforms the data, then uses fit_transform on transformed data with the final estimator. Parameters X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_params : dict of string -> object Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. Returns Xt : array-like, shape = [n_samples, n_transformed_features] Transformed samples'
def fit_transform(self, X, y=None, **fit_params):
last_step = self._final_estimator
Xt, fit_params = self._fit(X, y, **fit_params)
if hasattr(last_step, 'fit_transform'):
    return last_step.fit_transform(Xt, y, **fit_params)
elif last_step is None:
    return Xt
else:
    return last_step.fit(Xt, y, **fit_params).transform(Xt)
'Apply transforms to the data, and predict with the final estimator Parameters X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns y_pred : array-like'
@if_delegate_has_method(delegate='_final_estimator')
def predict(self, X):
Xt = X
for name, transform in self.steps[:-1]:
    if transform is not None:
        Xt = transform.transform(Xt)
return self.steps[-1][-1].predict(Xt)
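A minimal sketch of the transform-then-predict chain implemented above (step names and estimators are illustrative):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(random_state=0)
pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
pipe.fit(X, y)               # fit scaler, transform X, then fit the classifier
print(pipe.predict(X[:5]))   # replay the transforms, then clf.predict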
'Applies fit_predict of last step in pipeline after transforms. Applies fit_transforms of a pipeline to the data, followed by the fit_predict method of the final estimator in the pipeline. Valid only if the final estimator implements fit_predict. Parameters X : iterable Training data. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_params : dict of string -> object Parameters passed to the ``fit`` method of each step, where each parameter name is prefixed such that parameter ``p`` for step ``s`` has key ``s__p``. Returns y_pred : array-like'
@if_delegate_has_method(delegate='_final_estimator')
def fit_predict(self, X, y=None, **fit_params):
Xt, fit_params = self._fit(X, y, **fit_params)
return self.steps[-1][-1].fit_predict(Xt, y, **fit_params)
'Apply transforms, and predict_proba of the final estimator Parameters X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns y_proba : array-like, shape = [n_samples, n_classes]'
@if_delegate_has_method(delegate='_final_estimator')
def predict_proba(self, X):
Xt = X
for name, transform in self.steps[:-1]:
    if transform is not None:
        Xt = transform.transform(Xt)
return self.steps[-1][-1].predict_proba(Xt)
'Apply transforms, and decision_function of the final estimator Parameters X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns y_score : array-like, shape = [n_samples, n_classes]'
@if_delegate_has_method(delegate='_final_estimator')
def decision_function(self, X):
Xt = X
for name, transform in self.steps[:-1]:
    if transform is not None:
        Xt = transform.transform(Xt)
return self.steps[-1][-1].decision_function(Xt)
'Apply transforms, and predict_log_proba of the final estimator Parameters X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns y_score : array-like, shape = [n_samples, n_classes]'
@if_delegate_has_method(delegate='_final_estimator')
def predict_log_proba(self, X):
Xt = X
for name, transform in self.steps[:-1]:
    if transform is not None:
        Xt = transform.transform(Xt)
return self.steps[-1][-1].predict_log_proba(Xt)
'Apply transforms, and transform with the final estimator. This also works where the final estimator is ``None``: all prior transformations are applied. Parameters X : iterable Data to transform. Must fulfill input requirements of first step of the pipeline. Returns Xt : array-like, shape = [n_samples, n_transformed_features]'
@property
def transform(self):
# Touch the attribute so an AttributeError is raised eagerly if the
# final step does not implement transform
if self._final_estimator is not None:
    self._final_estimator.transform
return self._transform
'Apply inverse transformations in reverse order. All estimators in the pipeline must support ``inverse_transform``. Parameters Xt : array-like, shape = [n_samples, n_transformed_features] Data samples, where ``n_samples`` is the number of samples and ``n_features`` is the number of features. Must fulfill input requirements of last step of pipeline\'s ``inverse_transform`` method. Returns Xt : array-like, shape = [n_samples, n_features]'
@property
def inverse_transform(self):
# Touch the attribute on every step so an AttributeError is raised
# eagerly if any step does not implement inverse_transform
for name, transform in self.steps:
    if transform is not None:
        transform.inverse_transform
return self._inverse_transform
'Apply transforms, and score with the final estimator Parameters X : iterable Data to predict on. Must fulfill input requirements of first step of the pipeline. y : iterable, default=None Targets used for scoring. Must fulfill label requirements for all steps of the pipeline. sample_weight : array-like, default=None If not None, this argument is passed as ``sample_weight`` keyword argument to the ``score`` method of the final estimator. Returns score : float'
@if_delegate_has_method(delegate='_final_estimator')
def score(self, X, y=None, sample_weight=None):
Xt = X
for name, transform in self.steps[:-1]:
    if transform is not None:
        Xt = transform.transform(Xt)
score_params = {}
if sample_weight is not None:
    score_params['sample_weight'] = sample_weight
return self.steps[-1][-1].score(Xt, y, **score_params)
'Get parameters for this estimator. Parameters deep : boolean, optional If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns params : mapping of string to any Parameter names mapped to their values.'
def get_params(self, deep=True):
return self._get_params('transformer_list', deep=deep)
'Set the parameters of this estimator. Valid parameter keys can be listed with ``get_params()``. Returns self'
def set_params(self, **kwargs):
self._set_params('transformer_list', **kwargs)
return self
'Generate (name, est, weight) tuples excluding None transformers'
def _iter(self):
get_weight = (self.transformer_weights or {}).get
return ((name, trans, get_weight(name))
        for name, trans in self.transformer_list
        if trans is not None)
'Get feature names from all transformers. Returns feature_names : list of strings Names of the features produced by transform.'
def get_feature_names(self):
feature_names = []
for name, trans, weight in self._iter():
    if not hasattr(trans, 'get_feature_names'):
        raise AttributeError('Transformer %s (type %s) does not provide '
                             'get_feature_names.'
                             % (str(name), type(trans).__name__))
    feature_names.extend([name + '__' + f
                          for f in trans.get_feature_names()])
return feature_names
'Fit all transformers using X. Parameters X : iterable or array-like, depending on transformers Input data, used to fit transformers. y : array-like, shape (n_samples, ...), optional Targets for supervised learning. Returns self : FeatureUnion This estimator'
def fit(self, X, y=None):
self._validate_transformers()
transformers = Parallel(n_jobs=self.n_jobs)(
    delayed(_fit_one_transformer)(trans, X, y)
    for _, trans, _ in self._iter())
self._update_transformer_list(transformers)
return self
'Fit all transformers, transform the data and concatenate results. Parameters X : iterable or array-like, depending on transformers Input data to be transformed. y : array-like, shape (n_samples, ...), optional Targets for supervised learning. Returns X_t : array-like or sparse matrix, shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.'
def fit_transform(self, X, y=None, **fit_params):
self._validate_transformers()
result = Parallel(n_jobs=self.n_jobs)(
    delayed(_fit_transform_one)(trans, weight, X, y, **fit_params)
    for name, trans, weight in self._iter())
if not result:
    return np.zeros((X.shape[0], 0))
Xs, transformers = zip(*result)
self._update_transformer_list(transformers)
if any(sparse.issparse(f) for f in Xs):
    Xs = sparse.hstack(Xs).tocsr()
else:
    Xs = np.hstack(Xs)
return Xs
'Transform X separately by each transformer, concatenate results. Parameters X : iterable or array-like, depending on transformers Input data to be transformed. Returns X_t : array-like or sparse matrix, shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.'
def transform(self, X):
Xs = Parallel(n_jobs=self.n_jobs)(
    delayed(_transform_one)(trans, weight, X)
    for name, trans, weight in self._iter())
if not Xs:
    return np.zeros((X.shape[0], 0))
if any(sparse.issparse(f) for f in Xs):
    Xs = sparse.hstack(Xs).tocsr()
else:
    Xs = np.hstack(Xs)
return Xs
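Usage sketch: two transformers fit in parallel and their outputs stacked side by side (the transformer choices are illustrative):

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion

X, y = make_classification(n_features=20, random_state=0)
union = FeatureUnion([('pca', PCA(n_components=3)),
                      ('kbest', SelectKBest(k=2))])
Xt = union.fit_transform(X, y)
print(Xt.shape)    # (100, 5): 3 PCA components hstacked with 2 selected features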
'Fit the model with X. Samples random projection according to n_features. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. Returns self : object Returns the transformer.'
def fit(self, X, y=None):
X = check_array(X, accept_sparse='csr')
random_state = check_random_state(self.random_state)
n_features = X.shape[1]
self.random_weights_ = (np.sqrt(2 * self.gamma) *
                        random_state.normal(size=(n_features,
                                                  self.n_components)))
self.random_offset_ = random_state.uniform(0, 2 * np.pi,
                                            size=self.n_components)
return self
'Apply the approximate feature map to X. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) New data, where n_samples is the number of samples and n_features is the number of features. Returns X_new : array-like, shape (n_samples, n_components)'
def transform(self, X):
check_is_fitted(self, 'random_weights_')
X = check_array(X, accept_sparse='csr')
projection = safe_sparse_dot(X, self.random_weights_)
projection += self.random_offset_
np.cos(projection, projection)
projection *= np.sqrt(2.0) / np.sqrt(self.n_components)
return projection
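This is the random Fourier feature map of Rahimi and Recht: inner products of the transformed data approximate the RBF kernel. A sanity-check sketch (sizes and gamma are illustrative):

import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

X = np.random.RandomState(0).randn(50, 4)
Z = RBFSampler(gamma=0.5, n_components=2000, random_state=0).fit_transform(X)
approx = np.dot(Z, Z.T)
exact = rbf_kernel(X, gamma=0.5)
print(np.abs(approx - exact).max())   # small; shrinks as n_components grows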
'Fit the model with X. Samples random projection according to n_features. Parameters X : array-like, shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. Returns self : object Returns the transformer.'
def fit(self, X, y=None):
X = check_array(X)
random_state = check_random_state(self.random_state)
n_features = X.shape[1]
uniform = random_state.uniform(size=(n_features, self.n_components))
self.random_weights_ = (1.0 / np.pi) * np.log(np.tan((np.pi / 2.0) * uniform))
self.random_offset_ = random_state.uniform(0, 2 * np.pi,
                                            size=self.n_components)
return self
'Apply the approximate feature map to X. Parameters X : array-like, shape (n_samples, n_features) New data, where n_samples is the number of samples and n_features is the number of features. All values of X must be strictly greater than "-skewedness". Returns X_new : array-like, shape (n_samples, n_components)'
def transform(self, X):
check_is_fitted(self, 'random_weights_')
X = as_float_array(X, copy=True)
X = check_array(X, copy=False)
if (X <= -self.skewedness).any():
    raise ValueError('X may not contain entries smaller than -skewedness.')
X += self.skewedness
np.log(X, X)
projection = safe_sparse_dot(X, self.random_weights_)
projection += self.random_offset_
np.cos(projection, projection)
projection *= np.sqrt(2.0) / np.sqrt(self.n_components)
return projection
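Usage sketch: because entries must exceed -skewedness, the map is typically applied to non-negative data such as histograms (values illustrative):

import numpy as np
from sklearn.kernel_approximation import SkewedChi2Sampler

X = np.abs(np.random.RandomState(0).randn(10, 5))    # non-negative toy data
Z = SkewedChi2Sampler(skewedness=1.0, n_components=100,
                      random_state=0).fit_transform(X)
print(Z.shape)    # (10, 100)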
'Set the ``sample_interval_`` attribute from ``sample_steps`` (or copy ``sample_interval`` when it is provided); no actual model fitting takes place.'
def fit(self, X, y=None):
X = check_array(X, accept_sparse='csr')
if self.sample_interval is None:
    if self.sample_steps == 1:
        self.sample_interval_ = 0.8
    elif self.sample_steps == 2:
        self.sample_interval_ = 0.5
    elif self.sample_steps == 3:
        self.sample_interval_ = 0.4
    else:
        raise ValueError('If sample_steps is not in [1, 2, 3], you need '
                         'to provide sample_interval')
else:
    self.sample_interval_ = self.sample_interval
return self
'Apply approximate feature map to X. Parameters X : {array-like, sparse matrix}, shape = (n_samples, n_features) Returns X_new : {array, sparse matrix}, shape = (n_samples, n_features * (2*sample_steps + 1)) Whether the return value is an array or a sparse matrix depends on the type of the input X.'
def transform(self, X):
msg = ('%(name)s is not fitted. Call fit to set the parameters before '
       'calling transform')
check_is_fitted(self, 'sample_interval_', msg=msg)
X = check_array(X, accept_sparse='csr')
sparse = sp.issparse(X)
if ((X.data if sparse else X) < 0).any():
    raise ValueError('Entries of X must be non-negative.')
transf = self._transform_sparse if sparse else self._transform_dense
return transf(X)
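Usage sketch: each input feature expands into 2 * sample_steps + 1 output features, so sample_steps=2 maps 4 features to 20 (toy data illustrative):

import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler

X = np.abs(np.random.RandomState(0).randn(10, 4))    # entries must be non-negative
Z = AdditiveChi2Sampler(sample_steps=2).fit_transform(X)
print(Z.shape)    # (10, 20)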
'Fit estimator to data. Samples a subset of training points, computes the kernel on these and computes the normalization matrix. Parameters X : array-like, shape=(n_samples, n_features) Training data.'
def fit(self, X, y=None):
X = check_array(X, accept_sparse='csr')
rnd = check_random_state(self.random_state)
n_samples = X.shape[0]
if self.n_components > n_samples:
    n_components = n_samples
    warnings.warn('n_components > n_samples. This is not possible.\n'
                  'n_components was set to n_samples, which results in '
                  'inefficient evaluation of the full kernel.')
else:
    n_components = self.n_components
n_components = min(n_samples, n_components)
inds = rnd.permutation(n_samples)
basis_inds = inds[:n_components]
basis = X[basis_inds]
basis_kernel = pairwise_kernels(basis, metric=self.kernel,
                                filter_params=True,
                                **self._get_kernel_params())
U, S, V = svd(basis_kernel)
S = np.maximum(S, 1e-12)
self.normalization_ = np.dot(U / np.sqrt(S), V)
self.components_ = basis
self.component_indices_ = inds
return self
'Apply feature map to X. Computes an approximate feature map using the kernel between some training points and X. Parameters X : array-like, shape=(n_samples, n_features) Data to transform. Returns X_transformed : array, shape=(n_samples, n_components) Transformed data.'
def transform(self, X):
check_is_fitted(self, 'components_')
X = check_array(X, accept_sparse='csr')
kernel_params = self._get_kernel_params()
embedded = pairwise_kernels(X, self.components_,
                            metric=self.kernel,
                            filter_params=True,
                            **kernel_params)
return np.dot(embedded, self.normalization_.T)
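The typical use of the Nystroem map is to feed a linear model, trading the exact kernel machine for a much cheaper approximation. A hedged sketch (kernel, gamma, and component count are illustrative):

from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, random_state=0)
model = Pipeline([('feat', Nystroem(kernel='rbf', gamma=0.1,
                                    n_components=100, random_state=0)),
                  ('svm', LinearSVC())])
print(model.fit(X, y).score(X, y))   # training accuracy of the approximate kernel machine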
'Fits a Minimum Covariance Determinant with the FastMCD algorithm. Parameters X : array-like, shape = [n_samples, n_features] Training data, where n_samples is the number of samples and n_features is the number of features. y : not used, present for API consistency. Returns self : object Returns self.'
def fit(self, X, y=None):
X = check_array(X, ensure_min_samples=2, estimator='MinCovDet')
random_state = check_random_state(self.random_state)
n_samples, n_features = X.shape
if (linalg.svdvals(np.dot(X.T, X)) > 1e-08).sum() != n_features:
    warnings.warn('The covariance matrix associated to your dataset '
                  'is not full rank')
raw_location, raw_covariance, raw_support, raw_dist = fast_mcd(
    X, support_fraction=self.support_fraction,
    cov_computation_method=self._nonrobust_covariance,
    random_state=random_state)
if self.assume_centered:
    raw_location = np.zeros(n_features)
    raw_covariance = self._nonrobust_covariance(X[raw_support],
                                                assume_centered=True)
    precision = linalg.pinvh(raw_covariance)
    raw_dist = np.sum(np.dot(X, precision) * X, 1)
self.raw_location_ = raw_location
self.raw_covariance_ = raw_covariance
self.raw_support_ = raw_support
self.location_ = raw_location
self.support_ = raw_support
self.dist_ = raw_dist
self.correct_covariance(X)
self.reweight_covariance(X)
return self
'Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]_. Parameters data : array-like, shape (n_samples, n_features) The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. References .. [RVD] `A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS` Returns covariance_corrected : array-like, shape (n_features, n_features) Corrected robust covariance estimate.'
def correct_covariance(self, data):
correction = np.median(self.dist_) / chi2(data.shape[1]).isf(0.5)
covariance_corrected = self.raw_covariance_ * correction
self.dist_ /= correction
return covariance_corrected
'Re-weight raw Minimum Covariance Determinant estimates. Re-weight observations using Rousseeuw\'s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [RVDriessen]_. Parameters data : array-like, shape (n_samples, n_features) The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. References .. [RVDriessen] `A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS` Returns location_reweighted : array-like, shape (n_features, ) Re-weighted robust location estimate. covariance_reweighted : array-like, shape (n_features, n_features) Re-weighted robust covariance estimate. support_reweighted : array-like, type boolean, shape (n_samples,) A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates.'
def reweight_covariance(self, data):
n_samples, n_features = data.shape
mask = self.dist_ < chi2(n_features).isf(0.025)
if self.assume_centered:
    location_reweighted = np.zeros(n_features)
else:
    location_reweighted = data[mask].mean(0)
covariance_reweighted = self._nonrobust_covariance(
    data[mask], assume_centered=self.assume_centered)
support_reweighted = np.zeros(n_samples, dtype=bool)
support_reweighted[mask] = True
self._set_covariance(covariance_reweighted)
self.location_ = location_reweighted
self.support_ = support_reweighted
X_centered = data - self.location_
self.dist_ = np.sum(np.dot(X_centered, self.get_precision()) * X_centered, 1)
return location_reweighted, covariance_reweighted, support_reweighted
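Usage sketch of the full fit / correct / reweight cycle on contaminated data (the contamination scheme is illustrative):

import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
X[:10] += 10.0                 # shift 10% of the samples far away
mcd = MinCovDet(random_state=0).fit(X)
print(mcd.location_)           # close to (0, 0) despite the outliers
print(mcd.support_.sum(), 'observations kept for the reweighted estimate')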
'Fit the EllipticEnvelope model with X. Parameters X : numpy array or sparse matrix of shape [n_samples, n_features] Training data y : (ignored)'
def fit(self, X, y=None):
super(EllipticEnvelope, self).fit(X)
self.threshold_ = sp.stats.scoreatpercentile(
    self.dist_, 100.0 * (1.0 - self.contamination))
return self
'Compute the decision function of the given observations. Parameters X : array-like, shape (n_samples, n_features) raw_values : bool Whether or not to consider raw Mahalanobis distances as the decision function. Must be False (default) for compatibility with the other outlier detection tools. Returns decision : array-like, shape (n_samples, ) Decision function of the samples. It is equal to the Mahalanobis distances if `raw_values` is True. By default (``raw_values=False``), it is equal to the cubic root of the shifted Mahalanobis distances. In that case, the threshold for being an outlier is 0, which ensures compatibility with other outlier detection tools such as the One-Class SVM.'
def decision_function(self, X, raw_values=False):
check_is_fitted(self, 'threshold_')
X = check_array(X)
mahal_dist = self.mahalanobis(X)
if raw_values:
    decision = mahal_dist
else:
    transformed_mahal_dist = mahal_dist ** 0.33
    decision = self.threshold_ ** 0.33 - transformed_mahal_dist
return decision
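With the shifted cube-root transform above, the decision function is positive for inliers and negative for outliers, and predict applies the same threshold. A minimal sketch (toy data illustrative):

import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = np.r_[rng.randn(95, 2), rng.uniform(6, 8, size=(5, 2))]
env = EllipticEnvelope(contamination=0.05, random_state=0).fit(X)
print(env.predict(X)[-5:])     # [-1 -1 -1 -1 -1]: the planted outliers are flagged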